00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 1992
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3258
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.052 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.053 The recommended git tool is: git
00:00:00.053 using credential 00000000-0000-0000-0000-000000000002
00:00:00.054 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.080 Fetching changes from the remote Git repository
00:00:00.083 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.136 Using shallow fetch with depth 1
00:00:00.136 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.136 > git --version # timeout=10
00:00:00.196 > git --version # 'git version 2.39.2'
00:00:00.196 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.249 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.249 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.478 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.492 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.505 Checking out Revision 4b79378c7834917407ff4d2cff4edf1dcbb13c5f (FETCH_HEAD)
00:00:04.505 > git config core.sparsecheckout # timeout=10
00:00:04.519 > git read-tree -mu HEAD # timeout=10
00:00:04.536 > git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=5
00:00:04.556 Commit message: "jbp-per-patch: add create-perf-report job as a part of testing"
00:00:04.557 > git rev-list --no-walk 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=10
00:00:04.665 [Pipeline] Start of Pipeline
00:00:04.685 [Pipeline] library
00:00:04.687 Loading library shm_lib@master
00:00:04.687 Library shm_lib@master is cached. Copying from home.
00:00:04.707 [Pipeline] node
00:00:19.719 Still waiting to schedule task
00:00:19.720 ‘CYP7’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘FCP03’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘FCP04’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘FCP07’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘FCP08’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘FCP09’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘FCP10’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘FCP11’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘FCP12’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘GP10’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘GP13’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘GP15’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘GP16’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘GP18’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘GP19’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘GP20’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘GP21’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘GP22’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘GP4’ is offline
00:00:19.720 ‘GP5’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘Jenkins’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘ME1’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘ME2’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘ME3’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘PE5’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘SM1’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘SM28’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘SM29’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘SM2’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘SM30’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘SM31’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘SM32’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘SM33’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘SM34’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘SM35’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘SM5’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘SM6’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘SM7’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘SM8’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘VM-host-PE1’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘VM-host-PE2’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘VM-host-PE3’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘VM-host-PE4’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘VM-host-SM18’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘VM-host-WFP25’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘WCP0’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘WFP17’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘WFP28’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘WFP2’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘WFP32’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘WFP34’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘WFP35’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘WFP36’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘WFP37’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘WFP38’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘WFP49’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘WFP63’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘WFP68’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘WFP69’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘WFP9’ is offline
00:00:19.720 ‘ipxe-staging’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘prc_bsc_waikikibeach64’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘spdk-pxe-01’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:19.720 ‘spdk-pxe-02’ doesn’t have label ‘DiskNvme&&NetCVL’
00:07:20.755 Running on GP2 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:07:20.758 [Pipeline] {
00:07:20.777 [Pipeline] catchError
00:07:20.780 [Pipeline] {
00:07:20.799 [Pipeline] wrap
00:07:20.811 [Pipeline] {
00:07:20.823 [Pipeline] stage
00:07:20.825 [Pipeline] { (Prologue)
00:07:21.013 [Pipeline] sh
00:07:21.286 + logger -p user.info -t JENKINS-CI
00:07:21.309 [Pipeline] echo
00:07:21.310 Node: GP2
00:07:21.323 [Pipeline] sh
00:07:21.629 [Pipeline] setCustomBuildProperty
00:07:21.645 [Pipeline] echo
00:07:21.647 Cleanup processes
00:07:21.654 [Pipeline] sh
00:07:21.937 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:07:21.938 1645807 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:07:21.954 [Pipeline] sh
00:07:22.238 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:07:22.238 ++ grep -v 'sudo pgrep'
00:07:22.238 ++ awk '{print $1}'
00:07:22.238 + sudo kill -9
00:07:22.238 + true
00:07:22.255 [Pipeline] cleanWs
00:07:22.266 [WS-CLEANUP] Deleting project workspace...
00:07:22.266 [WS-CLEANUP] Deferred wipeout is used...
00:07:22.274 [WS-CLEANUP] done
00:07:22.279 [Pipeline] setCustomBuildProperty
00:07:22.296 [Pipeline] sh
00:07:22.581 + sudo git config --global --replace-all safe.directory '*'
00:07:22.678 [Pipeline] httpRequest
00:07:22.701 [Pipeline] echo
00:07:22.703 Sorcerer 10.211.164.101 is alive
00:07:22.714 [Pipeline] httpRequest
00:07:22.720 HttpMethod: GET
00:07:22.720 URL: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:07:22.721 Sending request to url: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:07:22.723 Response Code: HTTP/1.1 200 OK
00:07:22.723 Success: Status code 200 is in the accepted range: 200,404
00:07:22.724 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:07:22.868 [Pipeline] sh
00:07:23.152 + tar --no-same-owner -xf jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:07:23.170 [Pipeline] httpRequest
00:07:23.189 [Pipeline] echo
00:07:23.191 Sorcerer 10.211.164.101 is alive
00:07:23.201 [Pipeline] httpRequest
00:07:23.206 HttpMethod: GET
00:07:23.206 URL: http://10.211.164.101/packages/spdk_9937c0160db0c834d5fa91bc55689413b256518c.tar.gz
00:07:23.207 Sending request to url: http://10.211.164.101/packages/spdk_9937c0160db0c834d5fa91bc55689413b256518c.tar.gz
00:07:23.209 Response Code: HTTP/1.1 200 OK
00:07:23.210 Success: Status code 200 is in the accepted range: 200,404
00:07:23.210 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_9937c0160db0c834d5fa91bc55689413b256518c.tar.gz
00:07:25.374 [Pipeline] sh
00:07:25.655 + tar --no-same-owner -xf spdk_9937c0160db0c834d5fa91bc55689413b256518c.tar.gz
00:07:28.194 [Pipeline] sh
00:07:28.477 + git -C spdk log --oneline -n5
00:07:28.477 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO
00:07:28.477 6c7c1f57e accel: add sequence outstanding stat
00:07:28.477 3bc8e6a26 accel: add utility to put task
00:07:28.477 2dba73997 accel: move get task utility
00:07:28.477 e45c8090e accel: improve accel sequence obj release
00:07:28.501 [Pipeline] withCredentials
00:07:28.535 > git --version # timeout=10
00:07:28.573 > git --version # 'git version 2.39.2'
00:07:28.589 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:07:28.591 [Pipeline] {
00:07:28.604 [Pipeline] retry
00:07:28.606 [Pipeline] {
00:07:28.618 [Pipeline] sh
00:07:28.899 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:07:28.910 [Pipeline] }
00:07:28.926 [Pipeline] // retry
00:07:28.930 [Pipeline] }
00:07:28.946 [Pipeline] // withCredentials
00:07:28.955 [Pipeline] httpRequest
00:07:28.972 [Pipeline] echo
00:07:28.974 Sorcerer 10.211.164.101 is alive
00:07:28.981 [Pipeline] httpRequest
00:07:28.985 HttpMethod: GET
00:07:28.985 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:07:28.985 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:07:28.987 Response Code: HTTP/1.1 200 OK
00:07:28.988 Success: Status code 200 is in the accepted range: 200,404
00:07:28.988 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:07:30.216 [Pipeline] sh
00:07:30.498 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:07:32.414 [Pipeline] sh
00:07:32.697 + git -C dpdk log --oneline -n5
00:07:32.697 caf0f5d395 version: 22.11.4
00:07:32.697 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:07:32.697 dc9c799c7d vhost: fix missing spinlock unlock
00:07:32.697 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:07:32.697 6ef77f2a5e net/gve: fix RX buffer size alignment
00:07:32.709 [Pipeline] }
00:07:32.731 [Pipeline] // stage
00:07:32.742 [Pipeline] stage
00:07:32.744 [Pipeline] { (Prepare)
00:07:32.770 [Pipeline] writeFile
00:07:32.788 [Pipeline] sh
00:07:33.071 + logger -p user.info -t JENKINS-CI
00:07:33.083 [Pipeline] sh
00:07:33.365 + logger -p user.info -t JENKINS-CI
00:07:33.374 [Pipeline] sh
00:07:33.651 + cat autorun-spdk.conf
00:07:33.651 SPDK_RUN_FUNCTIONAL_TEST=1
00:07:33.651 SPDK_TEST_NVMF=1
00:07:33.652 SPDK_TEST_NVME_CLI=1
00:07:33.652 SPDK_TEST_NVMF_TRANSPORT=tcp
00:07:33.652 SPDK_TEST_NVMF_NICS=e810
00:07:33.652 SPDK_TEST_VFIOUSER=1
00:07:33.652 SPDK_RUN_UBSAN=1
00:07:33.652 NET_TYPE=phy
00:07:33.652 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:07:33.652 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:07:33.658 RUN_NIGHTLY=1
00:07:33.665 [Pipeline] readFile
00:07:33.692 [Pipeline] withEnv
00:07:33.694 [Pipeline] {
00:07:33.710 [Pipeline] sh
00:07:33.999 + set -ex
00:07:33.999 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:07:33.999 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:07:33.999 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:07:33.999 ++ SPDK_TEST_NVMF=1
00:07:33.999 ++ SPDK_TEST_NVME_CLI=1
00:07:33.999 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:07:33.999 ++ SPDK_TEST_NVMF_NICS=e810
00:07:33.999 ++ SPDK_TEST_VFIOUSER=1
00:07:33.999 ++ SPDK_RUN_UBSAN=1
00:07:33.999 ++ NET_TYPE=phy
00:07:33.999 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:07:33.999 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:07:33.999 ++ RUN_NIGHTLY=1
00:07:33.999 + case $SPDK_TEST_NVMF_NICS in
00:07:33.999 + DRIVERS=ice
00:07:33.999 + [[ tcp == \r\d\m\a ]]
00:07:33.999 + [[ -n ice ]]
00:07:33.999 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:07:33.999 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:07:33.999 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:07:33.999 rmmod: ERROR: Module irdma is not currently loaded
00:07:33.999 rmmod: ERROR: Module i40iw is not currently loaded
00:07:33.999 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:07:33.999 + true
00:07:33.999 + for D in $DRIVERS
00:07:33.999 + sudo modprobe ice
00:07:33.999 + exit 0
00:07:34.010 [Pipeline] }
00:07:34.029 [Pipeline] // withEnv
00:07:34.034 [Pipeline] }
00:07:34.051 [Pipeline] // stage
00:07:34.060 [Pipeline] catchError
00:07:34.062 [Pipeline] {
00:07:34.077 [Pipeline] timeout
00:07:34.077 Timeout set to expire in 50 min
00:07:34.078 [Pipeline] {
00:07:34.091 [Pipeline] stage
00:07:34.093 [Pipeline] { (Tests)
00:07:34.111 [Pipeline] sh
00:07:34.391 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:07:34.391 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:07:34.391 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:07:34.391 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:07:34.391 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:07:34.391 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:07:34.391 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:07:34.391 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:07:34.391 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:07:34.391 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:07:34.391 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:07:34.391 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:07:34.391 + source /etc/os-release
00:07:34.391 ++ NAME='Fedora Linux'
00:07:34.391 ++ VERSION='38 (Cloud Edition)'
00:07:34.391 ++ ID=fedora
00:07:34.391 ++ VERSION_ID=38
00:07:34.391 ++ VERSION_CODENAME=
00:07:34.391 ++ PLATFORM_ID=platform:f38
00:07:34.391 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:07:34.391 ++ ANSI_COLOR='0;38;2;60;110;180'
00:07:34.391 ++ LOGO=fedora-logo-icon
00:07:34.391 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:07:34.391 ++ HOME_URL=https://fedoraproject.org/
00:07:34.391 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:07:34.391 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:07:34.391 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:07:34.391 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:07:34.391 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:07:34.391 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:07:34.391 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:07:34.391 ++ SUPPORT_END=2024-05-14
00:07:34.391 ++ VARIANT='Cloud Edition'
00:07:34.391 ++ VARIANT_ID=cloud
00:07:34.391 + uname -a
00:07:34.391 Linux spdk-gp-02 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:07:34.391 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:07:35.328 Hugepages
00:07:35.328 node hugesize free / total
00:07:35.328 node0 1048576kB 0 / 0
00:07:35.328 node0 2048kB 0 / 0
00:07:35.328 node1 1048576kB 0 / 0
00:07:35.328 node1 2048kB 0 / 0
00:07:35.328
00:07:35.328 Type BDF Vendor Device NUMA Driver Device Block devices
00:07:35.328 I/OAT 0000:00:04.0 8086 3c20 0 ioatdma - -
00:07:35.328 I/OAT 0000:00:04.1 8086 3c21 0 ioatdma - -
00:07:35.328 I/OAT 0000:00:04.2 8086 3c22 0 ioatdma - -
00:07:35.328 I/OAT 0000:00:04.3 8086 3c23 0 ioatdma - -
00:07:35.328 I/OAT 0000:00:04.4 8086 3c24 0 ioatdma - -
00:07:35.328 I/OAT 0000:00:04.5 8086 3c25 0 ioatdma - -
00:07:35.328 I/OAT 0000:00:04.6 8086 3c26 0 ioatdma - -
00:07:35.328 I/OAT 0000:00:04.7 8086 3c27 0 ioatdma - -
00:07:35.328 I/OAT 0000:80:04.0 8086 3c20 1 ioatdma - -
00:07:35.328 I/OAT 0000:80:04.1 8086 3c21 1 ioatdma - -
00:07:35.328 I/OAT 0000:80:04.2 8086 3c22 1 ioatdma - -
00:07:35.328 I/OAT 0000:80:04.3 8086 3c23 1 ioatdma - -
00:07:35.328 I/OAT 0000:80:04.4 8086 3c24 1 ioatdma - -
00:07:35.328 I/OAT 0000:80:04.5 8086 3c25 1 ioatdma - -
00:07:35.328 I/OAT 0000:80:04.6 8086 3c26 1 ioatdma - -
00:07:35.328 I/OAT 0000:80:04.7 8086 3c27 1 ioatdma - -
00:07:35.328 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:07:35.328 + rm -f /tmp/spdk-ld-path
00:07:35.328 + source autorun-spdk.conf
00:07:35.328 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:07:35.328 ++ SPDK_TEST_NVMF=1
00:07:35.328 ++ SPDK_TEST_NVME_CLI=1
00:07:35.328 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:07:35.328 ++ SPDK_TEST_NVMF_NICS=e810
00:07:35.328 ++ SPDK_TEST_VFIOUSER=1
00:07:35.328 ++ SPDK_RUN_UBSAN=1
00:07:35.328 ++ NET_TYPE=phy
00:07:35.328 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:07:35.328 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:07:35.328 ++ RUN_NIGHTLY=1
00:07:35.328 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:07:35.328 + [[ -n '' ]]
00:07:35.328 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:07:35.328 + for M in /var/spdk/build-*-manifest.txt
00:07:35.328 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:07:35.328 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:07:35.328 + for M in /var/spdk/build-*-manifest.txt
00:07:35.328 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:07:35.328 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:07:35.328 ++ uname
00:07:35.328 + [[ Linux == \L\i\n\u\x ]]
00:07:35.328 + sudo dmesg -T
00:07:35.587 + sudo dmesg --clear
00:07:35.587 + dmesg_pid=1646405
00:07:35.587 + [[ Fedora Linux == FreeBSD ]]
00:07:35.587 + sudo dmesg -Tw
00:07:35.587 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:07:35.587 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:07:35.587 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:07:35.587 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:07:35.587 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:07:35.587 + [[ -x /usr/src/fio-static/fio ]]
00:07:35.587 + export FIO_BIN=/usr/src/fio-static/fio
00:07:35.587 + FIO_BIN=/usr/src/fio-static/fio
00:07:35.587 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:07:35.587 + [[ ! -v VFIO_QEMU_BIN ]]
00:07:35.587 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:07:35.587 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:35.587 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:35.587 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:07:35.587 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:07:35.587 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:07:35.587 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:07:35.587 Test configuration:
00:07:35.587 SPDK_RUN_FUNCTIONAL_TEST=1
00:07:35.587 SPDK_TEST_NVMF=1
00:07:35.587 SPDK_TEST_NVME_CLI=1
00:07:35.587 SPDK_TEST_NVMF_TRANSPORT=tcp
00:07:35.587 SPDK_TEST_NVMF_NICS=e810
00:07:35.587 SPDK_TEST_VFIOUSER=1
00:07:35.587 SPDK_RUN_UBSAN=1
00:07:35.587 NET_TYPE=phy
00:07:35.587 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:07:35.587 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:07:35.587 RUN_NIGHTLY=1
02:13:25 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:35.587 02:13:25 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:07:35.587 02:13:25 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:35.587 02:13:25 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:35.587 02:13:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:35.587 02:13:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:35.587 02:13:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:35.587 02:13:25 -- paths/export.sh@5 -- $ export PATH
00:07:35.587 02:13:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:35.587 02:13:25 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:07:35.587 02:13:25 -- common/autobuild_common.sh@444 -- $ date +%s
00:07:35.587 02:13:25 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720656805.XXXXXX
00:07:35.587 02:13:25 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720656805.2cbI2D
00:07:35.587 02:13:25 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:07:35.587 02:13:25 -- common/autobuild_common.sh@450 -- $ '[' -n v22.11.4 ']'
00:07:35.587 02:13:25 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:07:35.587 02:13:25 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:07:35.587 02:13:25 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:07:35.587 02:13:25 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:07:35.587 02:13:25 -- common/autobuild_common.sh@460 -- $ get_config_params
00:07:35.587 02:13:25 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:07:35.587 02:13:25 -- common/autotest_common.sh@10 -- $ set +x
00:07:35.587 02:13:25 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:07:35.587 02:13:25 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:07:35.587 02:13:25 -- pm/common@17 -- $ local monitor
00:07:35.587 02:13:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:35.587 02:13:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:35.587 02:13:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:35.587 02:13:25 -- pm/common@21 -- $ date +%s
00:07:35.587 02:13:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:35.587 02:13:25 -- pm/common@21 -- $ date +%s
00:07:35.588 02:13:25 -- pm/common@25 -- $ sleep 1
00:07:35.588 02:13:25 -- pm/common@21 -- $ date +%s
00:07:35.588 02:13:25 -- pm/common@21 -- $ date +%s
00:07:35.588 02:13:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720656805
00:07:35.588 02:13:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720656805
00:07:35.588 02:13:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720656805
00:07:35.588 02:13:25 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720656805
00:07:35.588 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720656805_collect-vmstat.pm.log
00:07:35.588 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720656805_collect-cpu-load.pm.log
00:07:35.588 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720656805_collect-cpu-temp.pm.log
00:07:35.588 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720656805_collect-bmc-pm.bmc.pm.log
00:07:36.525 02:13:26 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:07:36.525 02:13:26 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:07:36.525 02:13:26 -- spdk/autobuild.sh@12 -- $ umask 022
00:07:36.525 02:13:26 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:07:36.525 02:13:26 -- spdk/autobuild.sh@16 -- $ date -u
00:07:36.525 Thu Jul 11 12:13:26 AM UTC 2024
00:07:36.525 02:13:26 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:07:36.525 v24.09-pre-200-g9937c0160
00:07:36.525 02:13:26 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:07:36.525 02:13:26 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:07:36.525 02:13:26 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:07:36.525 02:13:26 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:07:36.525 02:13:26 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:07:36.525 02:13:26 -- common/autotest_common.sh@10 -- $ set +x
00:07:36.525 ************************************
00:07:36.525 START TEST ubsan
00:07:36.525 ************************************
00:07:36.525 02:13:26 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:07:36.525 using ubsan
00:07:36.525
00:07:36.525 real 0m0.000s
00:07:36.525 user 0m0.000s
00:07:36.525 sys 0m0.000s
00:07:36.525 02:13:26 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:07:36.525 02:13:26 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:07:36.525 ************************************
00:07:36.525 END TEST ubsan
00:07:36.525 ************************************
00:07:36.525 02:13:26 -- common/autotest_common.sh@1142 -- $ return 0
00:07:36.525 02:13:26 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:07:36.525 02:13:26 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:07:36.525 02:13:26 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk
00:07:36.525 02:13:26 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']'
00:07:36.525 02:13:26 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:07:36.525 02:13:26 -- common/autotest_common.sh@10 -- $ set +x
00:07:36.525 ************************************
00:07:36.525 START TEST build_native_dpdk
00:07:36.525 ************************************
00:07:36.525 02:13:26 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk
00:07:36.525 02:13:26 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:07:36.525 02:13:26 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:07:36.525 02:13:26 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:07:36.525 02:13:26 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:07:36.525 02:13:26 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:07:36.525 02:13:26 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:07:36.525 02:13:26 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:07:36.525 02:13:26 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:07:36.525 02:13:26 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:07:36.525 02:13:26 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:07:36.525 02:13:26 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:07:36.784 02:13:26 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:07:36.784 02:13:26 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:07:36.785 caf0f5d395 version: 22.11.4
00:07:36.785 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:07:36.785 dc9c799c7d vhost: fix missing spinlock unlock
00:07:36.785 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:07:36.785 6ef77f2a5e net/gve: fix RX buffer size alignment
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
00:07:36.785 02:13:26 build_native_dpdk -- scripts/common.sh@364 -- $ return 1
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:07:36.785 patching file config/rte_config.h
00:07:36.785 Hunk #1 succeeded at 60 (offset 1 line).
00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:07:36.785 02:13:26 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:07:41.021 The Meson build system 00:07:41.021 Version: 1.3.1 00:07:41.021 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:07:41.021 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:07:41.021 Build type: native build 00:07:41.021 Program cat found: YES (/usr/bin/cat) 00:07:41.021 Project name: DPDK 00:07:41.021 Project version: 22.11.4 00:07:41.021 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:07:41.021 C linker for the host machine: gcc ld.bfd 2.39-16 00:07:41.021 Host machine cpu family: x86_64 00:07:41.021 Host machine cpu: x86_64 00:07:41.021 Message: ## Building in Developer Mode ## 00:07:41.021 Program pkg-config found: YES (/usr/bin/pkg-config) 00:07:41.021 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:07:41.021 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:07:41.021 Program objdump found: YES (/usr/bin/objdump) 00:07:41.021 Program python3 found: YES (/usr/bin/python3) 00:07:41.021 
Program cat found: YES (/usr/bin/cat) 00:07:41.021 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:07:41.021 Checking for size of "void *" : 8 00:07:41.021 Checking for size of "void *" : 8 (cached) 00:07:41.021 Library m found: YES 00:07:41.021 Library numa found: YES 00:07:41.021 Has header "numaif.h" : YES 00:07:41.021 Library fdt found: NO 00:07:41.021 Library execinfo found: NO 00:07:41.021 Has header "execinfo.h" : YES 00:07:41.021 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:07:41.021 Run-time dependency libarchive found: NO (tried pkgconfig) 00:07:41.021 Run-time dependency libbsd found: NO (tried pkgconfig) 00:07:41.021 Run-time dependency jansson found: NO (tried pkgconfig) 00:07:41.021 Run-time dependency openssl found: YES 3.0.9 00:07:41.021 Run-time dependency libpcap found: YES 1.10.4 00:07:41.021 Has header "pcap.h" with dependency libpcap: YES 00:07:41.021 Compiler for C supports arguments -Wcast-qual: YES 00:07:41.021 Compiler for C supports arguments -Wdeprecated: YES 00:07:41.021 Compiler for C supports arguments -Wformat: YES 00:07:41.021 Compiler for C supports arguments -Wformat-nonliteral: NO 00:07:41.021 Compiler for C supports arguments -Wformat-security: NO 00:07:41.021 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:41.021 Compiler for C supports arguments -Wmissing-prototypes: YES 00:07:41.021 Compiler for C supports arguments -Wnested-externs: YES 00:07:41.021 Compiler for C supports arguments -Wold-style-definition: YES 00:07:41.021 Compiler for C supports arguments -Wpointer-arith: YES 00:07:41.021 Compiler for C supports arguments -Wsign-compare: YES 00:07:41.021 Compiler for C supports arguments -Wstrict-prototypes: YES 00:07:41.021 Compiler for C supports arguments -Wundef: YES 00:07:41.021 Compiler for C supports arguments -Wwrite-strings: YES 00:07:41.021 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:07:41.021 
Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:07:41.021 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:07:41.021 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:07:41.021 Compiler for C supports arguments -mavx512f: YES 00:07:41.021 Checking if "AVX512 checking" compiles: YES 00:07:41.022 Fetching value of define "__SSE4_2__" : 1 00:07:41.022 Fetching value of define "__AES__" : 1 00:07:41.022 Fetching value of define "__AVX__" : 1 00:07:41.022 Fetching value of define "__AVX2__" : (undefined) 00:07:41.022 Fetching value of define "__AVX512BW__" : (undefined) 00:07:41.022 Fetching value of define "__AVX512CD__" : (undefined) 00:07:41.022 Fetching value of define "__AVX512DQ__" : (undefined) 00:07:41.022 Fetching value of define "__AVX512F__" : (undefined) 00:07:41.022 Fetching value of define "__AVX512VL__" : (undefined) 00:07:41.022 Fetching value of define "__PCLMUL__" : 1 00:07:41.022 Fetching value of define "__RDRND__" : (undefined) 00:07:41.022 Fetching value of define "__RDSEED__" : (undefined) 00:07:41.022 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:07:41.022 Compiler for C supports arguments -Wno-format-truncation: YES 00:07:41.022 Message: lib/kvargs: Defining dependency "kvargs" 00:07:41.022 Message: lib/telemetry: Defining dependency "telemetry" 00:07:41.022 Checking for function "getentropy" : YES 00:07:41.022 Message: lib/eal: Defining dependency "eal" 00:07:41.022 Message: lib/ring: Defining dependency "ring" 00:07:41.022 Message: lib/rcu: Defining dependency "rcu" 00:07:41.022 Message: lib/mempool: Defining dependency "mempool" 00:07:41.022 Message: lib/mbuf: Defining dependency "mbuf" 00:07:41.022 Fetching value of define "__PCLMUL__" : 1 (cached) 00:07:41.022 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:07:41.022 Compiler for C supports arguments -mpclmul: YES 00:07:41.022 Compiler for C supports arguments -maes: YES 00:07:41.022 Compiler for C 
supports arguments -mavx512f: YES (cached) 00:07:41.022 Compiler for C supports arguments -mavx512bw: YES 00:07:41.022 Compiler for C supports arguments -mavx512dq: YES 00:07:41.022 Compiler for C supports arguments -mavx512vl: YES 00:07:41.022 Compiler for C supports arguments -mvpclmulqdq: YES 00:07:41.022 Compiler for C supports arguments -mavx2: YES 00:07:41.022 Compiler for C supports arguments -mavx: YES 00:07:41.022 Message: lib/net: Defining dependency "net" 00:07:41.022 Message: lib/meter: Defining dependency "meter" 00:07:41.022 Message: lib/ethdev: Defining dependency "ethdev" 00:07:41.022 Message: lib/pci: Defining dependency "pci" 00:07:41.022 Message: lib/cmdline: Defining dependency "cmdline" 00:07:41.022 Message: lib/metrics: Defining dependency "metrics" 00:07:41.022 Message: lib/hash: Defining dependency "hash" 00:07:41.022 Message: lib/timer: Defining dependency "timer" 00:07:41.022 Fetching value of define "__AVX2__" : (undefined) (cached) 00:07:41.022 Compiler for C supports arguments -mavx2: YES (cached) 00:07:41.022 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:07:41.022 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:07:41.022 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:07:41.022 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:07:41.022 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:07:41.022 Message: lib/acl: Defining dependency "acl" 00:07:41.022 Message: lib/bbdev: Defining dependency "bbdev" 00:07:41.022 Message: lib/bitratestats: Defining dependency "bitratestats" 00:07:41.022 Run-time dependency libelf found: YES 0.190 00:07:41.022 Message: lib/bpf: Defining dependency "bpf" 00:07:41.022 Message: lib/cfgfile: Defining dependency "cfgfile" 00:07:41.022 Message: lib/compressdev: Defining dependency "compressdev" 00:07:41.022 Message: lib/cryptodev: Defining dependency "cryptodev" 00:07:41.022 Message: lib/distributor: 
Defining dependency "distributor" 00:07:41.022 Message: lib/efd: Defining dependency "efd" 00:07:41.022 Message: lib/eventdev: Defining dependency "eventdev" 00:07:41.022 Message: lib/gpudev: Defining dependency "gpudev" 00:07:41.022 Message: lib/gro: Defining dependency "gro" 00:07:41.022 Message: lib/gso: Defining dependency "gso" 00:07:41.022 Message: lib/ip_frag: Defining dependency "ip_frag" 00:07:41.022 Message: lib/jobstats: Defining dependency "jobstats" 00:07:41.022 Message: lib/latencystats: Defining dependency "latencystats" 00:07:41.022 Message: lib/lpm: Defining dependency "lpm" 00:07:41.022 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:07:41.022 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:07:41.022 Fetching value of define "__AVX512IFMA__" : (undefined) 00:07:41.022 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:07:41.022 Message: lib/member: Defining dependency "member" 00:07:41.022 Message: lib/pcapng: Defining dependency "pcapng" 00:07:41.022 Compiler for C supports arguments -Wno-cast-qual: YES 00:07:41.022 Message: lib/power: Defining dependency "power" 00:07:41.022 Message: lib/rawdev: Defining dependency "rawdev" 00:07:41.022 Message: lib/regexdev: Defining dependency "regexdev" 00:07:41.022 Message: lib/dmadev: Defining dependency "dmadev" 00:07:41.022 Message: lib/rib: Defining dependency "rib" 00:07:41.022 Message: lib/reorder: Defining dependency "reorder" 00:07:41.022 Message: lib/sched: Defining dependency "sched" 00:07:41.022 Message: lib/security: Defining dependency "security" 00:07:41.022 Message: lib/stack: Defining dependency "stack" 00:07:41.022 Has header "linux/userfaultfd.h" : YES 00:07:41.022 Message: lib/vhost: Defining dependency "vhost" 00:07:41.022 Message: lib/ipsec: Defining dependency "ipsec" 00:07:41.022 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:07:41.022 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 
00:07:41.022 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:07:41.022 Compiler for C supports arguments -mavx512bw: YES (cached) 00:07:41.022 Message: lib/fib: Defining dependency "fib" 00:07:41.022 Message: lib/port: Defining dependency "port" 00:07:41.022 Message: lib/pdump: Defining dependency "pdump" 00:07:41.022 Message: lib/table: Defining dependency "table" 00:07:41.022 Message: lib/pipeline: Defining dependency "pipeline" 00:07:41.022 Message: lib/graph: Defining dependency "graph" 00:07:41.022 Message: lib/node: Defining dependency "node" 00:07:41.022 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:07:41.022 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:07:41.022 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:07:41.022 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:07:41.022 Compiler for C supports arguments -Wno-sign-compare: YES 00:07:41.022 Compiler for C supports arguments -Wno-unused-value: YES 00:07:42.401 Compiler for C supports arguments -Wno-format: YES 00:07:42.401 Compiler for C supports arguments -Wno-format-security: YES 00:07:42.401 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:07:42.401 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:07:42.401 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:07:42.401 Compiler for C supports arguments -Wno-unused-parameter: YES 00:07:42.401 Fetching value of define "__AVX2__" : (undefined) (cached) 00:07:42.401 Compiler for C supports arguments -mavx2: YES (cached) 00:07:42.401 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:07:42.401 Compiler for C supports arguments -mavx512f: YES (cached) 00:07:42.401 Compiler for C supports arguments -mavx512bw: YES (cached) 00:07:42.401 Compiler for C supports arguments -march=skylake-avx512: YES 00:07:42.401 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:07:42.401 Program doxygen found: YES 
(/usr/bin/doxygen) 00:07:42.401 Configuring doxy-api.conf using configuration 00:07:42.401 Program sphinx-build found: NO 00:07:42.401 Configuring rte_build_config.h using configuration 00:07:42.401 Message: 00:07:42.401 ================= 00:07:42.401 Applications Enabled 00:07:42.401 ================= 00:07:42.401 00:07:42.401 apps: 00:07:42.401 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:07:42.401 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:07:42.401 test-security-perf, 00:07:42.401 00:07:42.401 Message: 00:07:42.401 ================= 00:07:42.401 Libraries Enabled 00:07:42.401 ================= 00:07:42.401 00:07:42.401 libs: 00:07:42.401 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:07:42.401 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:07:42.401 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:07:42.401 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:07:42.401 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:07:42.401 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:07:42.401 table, pipeline, graph, node, 00:07:42.401 00:07:42.401 Message: 00:07:42.401 =============== 00:07:42.401 Drivers Enabled 00:07:42.401 =============== 00:07:42.401 00:07:42.401 common: 00:07:42.401 00:07:42.401 bus: 00:07:42.401 pci, vdev, 00:07:42.401 mempool: 00:07:42.401 ring, 00:07:42.401 dma: 00:07:42.401 00:07:42.401 net: 00:07:42.401 i40e, 00:07:42.401 raw: 00:07:42.401 00:07:42.401 crypto: 00:07:42.401 00:07:42.401 compress: 00:07:42.401 00:07:42.401 regex: 00:07:42.401 00:07:42.401 vdpa: 00:07:42.401 00:07:42.401 event: 00:07:42.401 00:07:42.401 baseband: 00:07:42.401 00:07:42.401 gpu: 00:07:42.401 00:07:42.401 00:07:42.401 Message: 00:07:42.401 ================= 00:07:42.401 Content Skipped 00:07:42.401 ================= 00:07:42.401 00:07:42.401 apps: 
00:07:42.401 00:07:42.401 libs: 00:07:42.401 kni: explicitly disabled via build config (deprecated lib) 00:07:42.401 flow_classify: explicitly disabled via build config (deprecated lib) 00:07:42.401 00:07:42.401 drivers: 00:07:42.401 common/cpt: not in enabled drivers build config 00:07:42.401 common/dpaax: not in enabled drivers build config 00:07:42.401 common/iavf: not in enabled drivers build config 00:07:42.401 common/idpf: not in enabled drivers build config 00:07:42.401 common/mvep: not in enabled drivers build config 00:07:42.401 common/octeontx: not in enabled drivers build config 00:07:42.401 bus/auxiliary: not in enabled drivers build config 00:07:42.401 bus/dpaa: not in enabled drivers build config 00:07:42.401 bus/fslmc: not in enabled drivers build config 00:07:42.401 bus/ifpga: not in enabled drivers build config 00:07:42.401 bus/vmbus: not in enabled drivers build config 00:07:42.401 common/cnxk: not in enabled drivers build config 00:07:42.401 common/mlx5: not in enabled drivers build config 00:07:42.401 common/qat: not in enabled drivers build config 00:07:42.401 common/sfc_efx: not in enabled drivers build config 00:07:42.401 mempool/bucket: not in enabled drivers build config 00:07:42.401 mempool/cnxk: not in enabled drivers build config 00:07:42.401 mempool/dpaa: not in enabled drivers build config 00:07:42.401 mempool/dpaa2: not in enabled drivers build config 00:07:42.401 mempool/octeontx: not in enabled drivers build config 00:07:42.401 mempool/stack: not in enabled drivers build config 00:07:42.401 dma/cnxk: not in enabled drivers build config 00:07:42.401 dma/dpaa: not in enabled drivers build config 00:07:42.401 dma/dpaa2: not in enabled drivers build config 00:07:42.401 dma/hisilicon: not in enabled drivers build config 00:07:42.401 dma/idxd: not in enabled drivers build config 00:07:42.401 dma/ioat: not in enabled drivers build config 00:07:42.401 dma/skeleton: not in enabled drivers build config 00:07:42.401 net/af_packet: not in 
enabled drivers build config 00:07:42.401 net/af_xdp: not in enabled drivers build config 00:07:42.401 net/ark: not in enabled drivers build config 00:07:42.401 net/atlantic: not in enabled drivers build config 00:07:42.401 net/avp: not in enabled drivers build config 00:07:42.401 net/axgbe: not in enabled drivers build config 00:07:42.401 net/bnx2x: not in enabled drivers build config 00:07:42.401 net/bnxt: not in enabled drivers build config 00:07:42.401 net/bonding: not in enabled drivers build config 00:07:42.401 net/cnxk: not in enabled drivers build config 00:07:42.401 net/cxgbe: not in enabled drivers build config 00:07:42.401 net/dpaa: not in enabled drivers build config 00:07:42.401 net/dpaa2: not in enabled drivers build config 00:07:42.401 net/e1000: not in enabled drivers build config 00:07:42.401 net/ena: not in enabled drivers build config 00:07:42.401 net/enetc: not in enabled drivers build config 00:07:42.401 net/enetfec: not in enabled drivers build config 00:07:42.401 net/enic: not in enabled drivers build config 00:07:42.401 net/failsafe: not in enabled drivers build config 00:07:42.401 net/fm10k: not in enabled drivers build config 00:07:42.401 net/gve: not in enabled drivers build config 00:07:42.401 net/hinic: not in enabled drivers build config 00:07:42.401 net/hns3: not in enabled drivers build config 00:07:42.401 net/iavf: not in enabled drivers build config 00:07:42.401 net/ice: not in enabled drivers build config 00:07:42.401 net/idpf: not in enabled drivers build config 00:07:42.401 net/igc: not in enabled drivers build config 00:07:42.401 net/ionic: not in enabled drivers build config 00:07:42.401 net/ipn3ke: not in enabled drivers build config 00:07:42.401 net/ixgbe: not in enabled drivers build config 00:07:42.401 net/kni: not in enabled drivers build config 00:07:42.401 net/liquidio: not in enabled drivers build config 00:07:42.401 net/mana: not in enabled drivers build config 00:07:42.401 net/memif: not in enabled drivers build 
config 00:07:42.401 net/mlx4: not in enabled drivers build config 00:07:42.401 net/mlx5: not in enabled drivers build config 00:07:42.401 net/mvneta: not in enabled drivers build config 00:07:42.401 net/mvpp2: not in enabled drivers build config 00:07:42.401 net/netvsc: not in enabled drivers build config 00:07:42.401 net/nfb: not in enabled drivers build config 00:07:42.401 net/nfp: not in enabled drivers build config 00:07:42.401 net/ngbe: not in enabled drivers build config 00:07:42.401 net/null: not in enabled drivers build config 00:07:42.401 net/octeontx: not in enabled drivers build config 00:07:42.401 net/octeon_ep: not in enabled drivers build config 00:07:42.401 net/pcap: not in enabled drivers build config 00:07:42.401 net/pfe: not in enabled drivers build config 00:07:42.401 net/qede: not in enabled drivers build config 00:07:42.401 net/ring: not in enabled drivers build config 00:07:42.401 net/sfc: not in enabled drivers build config 00:07:42.401 net/softnic: not in enabled drivers build config 00:07:42.401 net/tap: not in enabled drivers build config 00:07:42.401 net/thunderx: not in enabled drivers build config 00:07:42.401 net/txgbe: not in enabled drivers build config 00:07:42.401 net/vdev_netvsc: not in enabled drivers build config 00:07:42.401 net/vhost: not in enabled drivers build config 00:07:42.401 net/virtio: not in enabled drivers build config 00:07:42.401 net/vmxnet3: not in enabled drivers build config 00:07:42.401 raw/cnxk_bphy: not in enabled drivers build config 00:07:42.401 raw/cnxk_gpio: not in enabled drivers build config 00:07:42.401 raw/dpaa2_cmdif: not in enabled drivers build config 00:07:42.401 raw/ifpga: not in enabled drivers build config 00:07:42.401 raw/ntb: not in enabled drivers build config 00:07:42.401 raw/skeleton: not in enabled drivers build config 00:07:42.401 crypto/armv8: not in enabled drivers build config 00:07:42.401 crypto/bcmfs: not in enabled drivers build config 00:07:42.401 crypto/caam_jr: not in enabled 
drivers build config 00:07:42.401 crypto/ccp: not in enabled drivers build config 00:07:42.401 crypto/cnxk: not in enabled drivers build config 00:07:42.401 crypto/dpaa_sec: not in enabled drivers build config 00:07:42.401 crypto/dpaa2_sec: not in enabled drivers build config 00:07:42.401 crypto/ipsec_mb: not in enabled drivers build config 00:07:42.401 crypto/mlx5: not in enabled drivers build config 00:07:42.401 crypto/mvsam: not in enabled drivers build config 00:07:42.401 crypto/nitrox: not in enabled drivers build config 00:07:42.401 crypto/null: not in enabled drivers build config 00:07:42.401 crypto/octeontx: not in enabled drivers build config 00:07:42.401 crypto/openssl: not in enabled drivers build config 00:07:42.401 crypto/scheduler: not in enabled drivers build config 00:07:42.402 crypto/uadk: not in enabled drivers build config 00:07:42.402 crypto/virtio: not in enabled drivers build config 00:07:42.402 compress/isal: not in enabled drivers build config 00:07:42.402 compress/mlx5: not in enabled drivers build config 00:07:42.402 compress/octeontx: not in enabled drivers build config 00:07:42.402 compress/zlib: not in enabled drivers build config 00:07:42.402 regex/mlx5: not in enabled drivers build config 00:07:42.402 regex/cn9k: not in enabled drivers build config 00:07:42.402 vdpa/ifc: not in enabled drivers build config 00:07:42.402 vdpa/mlx5: not in enabled drivers build config 00:07:42.402 vdpa/sfc: not in enabled drivers build config 00:07:42.402 event/cnxk: not in enabled drivers build config 00:07:42.402 event/dlb2: not in enabled drivers build config 00:07:42.402 event/dpaa: not in enabled drivers build config 00:07:42.402 event/dpaa2: not in enabled drivers build config 00:07:42.402 event/dsw: not in enabled drivers build config 00:07:42.402 event/opdl: not in enabled drivers build config 00:07:42.402 event/skeleton: not in enabled drivers build config 00:07:42.402 event/sw: not in enabled drivers build config 00:07:42.402 event/octeontx: 
not in enabled drivers build config 00:07:42.402 baseband/acc: not in enabled drivers build config 00:07:42.402 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:07:42.402 baseband/fpga_lte_fec: not in enabled drivers build config 00:07:42.402 baseband/la12xx: not in enabled drivers build config 00:07:42.402 baseband/null: not in enabled drivers build config 00:07:42.402 baseband/turbo_sw: not in enabled drivers build config 00:07:42.402 gpu/cuda: not in enabled drivers build config 00:07:42.402 00:07:42.402 00:07:42.402 Build targets in project: 316 00:07:42.402 00:07:42.402 DPDK 22.11.4 00:07:42.402 00:07:42.402 User defined options 00:07:42.402 libdir : lib 00:07:42.402 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:42.402 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:07:42.402 c_link_args : 00:07:42.402 enable_docs : false 00:07:42.402 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:07:42.402 enable_kmods : false 00:07:42.402 machine : native 00:07:42.402 tests : false 00:07:42.402 00:07:42.402 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:42.402 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:07:42.670 02:13:32 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j32 00:07:42.670 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:07:42.670 [1/745] Generating lib/rte_kvargs_mingw with a custom command 00:07:42.670 [2/745] Generating lib/rte_telemetry_mingw with a custom command 00:07:42.670 [3/745] Generating lib/rte_telemetry_def with a custom command 00:07:42.670 [4/745] Generating lib/rte_kvargs_def with a custom command 00:07:42.670 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:07:42.670 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:07:42.670 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:07:42.931 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:07:42.931 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:07:42.931 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:07:42.931 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:07:42.931 [12/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:07:42.931 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:07:42.931 [14/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:07:42.931 [15/745] Linking static target lib/librte_kvargs.a 00:07:42.931 [16/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:07:42.931 [17/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:07:42.931 [18/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:07:42.931 [19/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:42.931 [20/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 
00:07:42.931 [21/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:07:42.931 [22/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:42.931 [23/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:07:42.931 [24/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:07:42.931 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:07:42.931 [26/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:07:42.931 [27/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:07:43.192 [28/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:07:43.192 [29/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:07:43.192 [30/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:07:43.192 [31/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:07:43.192 [32/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:07:43.192 [33/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:43.192 [34/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:07:43.192 [35/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:43.192 [36/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:43.192 [37/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:43.192 [38/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:07:43.192 [39/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:07:43.192 [40/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:07:43.192 [41/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:07:43.192 [42/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 
00:07:43.192 [43/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:07:43.192 [44/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:07:43.192 [45/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:07:43.192 [46/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:07:43.192 [47/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:07:43.192 [48/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:07:43.192 [49/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:07:43.192 [50/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:07:43.192 [51/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:07:43.192 [52/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:07:43.192 [53/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:07:43.192 [54/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:07:43.192 [55/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:07:43.192 [56/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:07:43.192 [57/745] Generating lib/rte_eal_mingw with a custom command
00:07:43.192 [58/745] Generating lib/rte_eal_def with a custom command
00:07:43.459 [59/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:07:43.459 [60/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:07:43.459 [61/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:07:43.459 [62/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:07:43.459 [63/745] Generating lib/rte_ring_def with a custom command
00:07:43.459 [64/745] Generating lib/rte_ring_mingw with a custom command
00:07:43.459 [65/745] Generating lib/rte_rcu_mingw with a custom command
00:07:43.459 [66/745] Generating lib/rte_rcu_def with a custom command
00:07:43.459 [67/745] Linking target lib/librte_kvargs.so.23.0
00:07:43.459 [68/745] Generating lib/rte_mempool_def with a custom command
00:07:43.459 [69/745] Generating lib/rte_mempool_mingw with a custom command
00:07:43.459 [70/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:07:43.459 [71/745] Generating lib/rte_mbuf_def with a custom command
00:07:43.459 [72/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:07:43.459 [73/745] Generating lib/rte_mbuf_mingw with a custom command
00:07:43.459 [74/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:07:43.459 [75/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:07:43.459 [76/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:07:43.459 [77/745] Generating lib/rte_net_def with a custom command
00:07:43.459 [78/745] Generating lib/rte_net_mingw with a custom command
00:07:43.459 [79/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:07:43.459 [80/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:07:43.718 [81/745] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:07:43.718 [82/745] Linking static target lib/librte_ring.a
00:07:43.718 [83/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols
00:07:43.718 [84/745] Generating lib/rte_meter_mingw with a custom command
00:07:43.718 [85/745] Generating lib/rte_meter_def with a custom command
00:07:43.718 [86/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:07:43.718 [87/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:07:43.718 [88/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:07:43.718 [89/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:07:43.718 [90/745] Linking static target lib/librte_telemetry.a
00:07:43.983 [91/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:07:43.983 [92/745] Linking static target lib/librte_meter.a
00:07:43.983 [93/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:07:43.983 [94/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:07:43.983 [95/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:07:44.242 [96/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:07:44.242 [97/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:07:44.242 [98/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:07:44.508 [99/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:07:44.508 [100/745] Linking target lib/librte_telemetry.so.23.0
00:07:44.508 [101/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:07:44.508 [102/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:07:44.508 [103/745] Linking static target lib/net/libnet_crc_avx512_lib.a
00:07:44.508 [104/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:07:44.508 [105/745] Generating lib/rte_ethdev_def with a custom command
00:07:44.508 [106/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:07:44.508 [107/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:07:44.768 [108/745] Generating lib/rte_ethdev_mingw with a custom command
00:07:44.768 [109/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:07:44.768 [110/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols
00:07:44.768 [111/745] Generating lib/rte_pci_mingw with a custom command
00:07:44.768 [112/745] Generating lib/rte_pci_def with a custom command
00:07:44.768 [113/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:07:44.768 [114/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:07:44.768 [115/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:07:44.768 [116/745] Linking static target lib/librte_pci.a
00:07:44.768 [117/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:07:44.768 [118/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:07:44.768 [119/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:07:45.029 [120/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:07:45.029 [121/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:07:45.029 [122/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:07:45.029 [123/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:07:45.029 [124/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:07:45.029 [125/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:07:45.029 [126/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:07:45.029 [127/745] Generating lib/rte_cmdline_def with a custom command
00:07:45.029 [128/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:07:45.029 [129/745] Generating lib/rte_cmdline_mingw with a custom command
00:07:45.029 [130/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:07:45.029 [131/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:07:45.029 [132/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:07:45.029 [133/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:07:45.029 [134/745] Generating lib/rte_metrics_mingw with a custom command
00:07:45.029 [135/745] Generating lib/rte_metrics_def with a custom command
00:07:45.029 [136/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:07:45.029 [137/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:07:45.029 [138/745] Linking static target lib/librte_net.a
00:07:45.029 [139/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:07:45.292 [140/745] Generating lib/rte_hash_def with a custom command
00:07:45.292 [141/745] Generating lib/rte_timer_def with a custom command
00:07:45.292 [142/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:07:45.292 [143/745] Generating lib/rte_hash_mingw with a custom command
00:07:45.292 [144/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:07:45.292 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:07:45.292 [146/745] Generating lib/rte_timer_mingw with a custom command
00:07:45.292 [147/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:07:45.292 [148/745] Linking static target lib/librte_rcu.a
00:07:45.292 [149/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:07:45.292 [150/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:07:45.292 [151/745] Generating lib/rte_acl_def with a custom command
00:07:45.292 [152/745] Generating lib/rte_acl_mingw with a custom command
00:07:45.292 [153/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:07:45.292 [154/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:07:45.557 [155/745] Generating lib/rte_bbdev_def with a custom command
00:07:45.557 [156/745] Generating lib/rte_bbdev_mingw with a custom command
00:07:45.557 [157/745] Generating lib/rte_bitratestats_def with a custom command
00:07:45.557 [158/745] Generating lib/rte_bitratestats_mingw with a custom command
00:07:45.557 [159/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:07:45.557 [160/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:07:45.557 [161/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:07:45.557 [162/745] Linking static target lib/librte_mempool.a
00:07:45.557 [163/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:07:45.820 [164/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:07:45.820 [165/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:07:45.820 [166/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:07:45.820 [167/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:07:45.820 [168/745] Linking static target lib/librte_eal.a
00:07:45.820 [169/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:07:45.820 [170/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:07:46.082 [171/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:07:46.082 [172/745] Linking static target lib/librte_timer.a
00:07:46.082 [173/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:07:46.082 [174/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:07:46.082 [175/745] Linking static target lib/librte_cmdline.a
00:07:46.082 [176/745] Generating lib/rte_bpf_def with a custom command
00:07:46.082 [177/745] Generating lib/rte_bpf_mingw with a custom command
00:07:46.345 [178/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:07:46.345 [179/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:07:46.345 [180/745] Linking static target lib/librte_metrics.a
00:07:46.345 [181/745] Generating lib/rte_cfgfile_def with a custom command
00:07:46.345 [182/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:07:46.345 [183/745] Generating lib/rte_cfgfile_mingw with a custom command
00:07:46.605 [184/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:07:46.605 [185/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:07:46.605 [186/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:07:46.605 [187/745] Linking static target lib/librte_bitratestats.a
00:07:46.605 [188/745] Generating lib/rte_compressdev_def with a custom command
00:07:46.605 [189/745] Generating lib/rte_compressdev_mingw with a custom command
00:07:46.868 [190/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:07:46.868 [191/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:07:46.868 [192/745] Linking static target lib/librte_cfgfile.a
00:07:46.868 [193/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:07:46.868 [194/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:07:46.868 [195/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:07:46.868 [196/745] Generating lib/rte_cryptodev_def with a custom command
00:07:46.868 [197/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:07:46.868 [198/745] Generating lib/rte_cryptodev_mingw with a custom command
00:07:46.868 [199/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:07:47.132 [200/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:07:47.132 [201/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:07:47.132 [202/745] Generating lib/rte_distributor_def with a custom command
00:07:47.132 [203/745] Generating lib/rte_distributor_mingw with a custom command
00:07:47.132 [204/745] Generating lib/rte_efd_def with a custom command
00:07:47.132 [205/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:07:47.132 [206/745] Generating lib/rte_efd_mingw with a custom command
00:07:47.132 [207/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:07:47.132 [208/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:07:47.392 [209/745] Linking static target lib/librte_bbdev.a
00:07:47.392 [210/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:07:47.392 [211/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:07:47.392 [212/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:07:47.392 [213/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:07:47.655 [214/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:07:47.655 [215/745] Generating lib/rte_eventdev_def with a custom command
00:07:47.655 [216/745] Generating lib/rte_eventdev_mingw with a custom command
00:07:47.921 [217/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:07:47.921 [218/745] Generating lib/rte_gpudev_def with a custom command
00:07:47.921 [219/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:07:47.921 [220/745] Generating lib/rte_gpudev_mingw with a custom command
00:07:47.921 [221/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:07:48.182 [222/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:07:48.182 [223/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:07:48.182 [224/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:07:48.182 [225/745] Generating lib/rte_gro_def with a custom command
00:07:48.182 [226/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:07:48.444 [227/745] Generating lib/rte_gro_mingw with a custom command
00:07:48.444 [228/745] Linking static target lib/librte_compressdev.a
00:07:48.444 [229/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:07:48.444 [230/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:07:48.444 [231/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:07:48.444 [232/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:07:48.707 [233/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:07:48.707 [234/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:07:48.707 [235/745] Linking static target lib/librte_bpf.a
00:07:48.707 [236/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:07:48.966 [237/745] Generating lib/rte_gso_def with a custom command
00:07:48.966 [238/745] Generating lib/rte_gso_mingw with a custom command
00:07:48.966 [239/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:07:49.228 [240/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:07:49.228 [241/745] Linking static target lib/librte_distributor.a
00:07:49.228 [242/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:07:49.490 [243/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:07:49.490 [244/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:07:49.490 [245/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:07:49.490 [246/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:07:49.750 [247/745] Generating lib/rte_ip_frag_def with a custom command
00:07:49.750 [248/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:07:49.750 [249/745] Linking static target lib/librte_gpudev.a
00:07:49.750 [250/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:07:49.750 [251/745] Generating lib/rte_ip_frag_mingw with a custom command
00:07:49.750 [252/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:07:49.750 [253/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:07:49.750 [254/745] Generating lib/rte_jobstats_def with a custom command
00:07:49.750 [255/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:07:49.750 [256/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:07:49.750 [257/745] Generating lib/rte_jobstats_mingw with a custom command
00:07:49.750 [258/745] Generating lib/rte_latencystats_def with a custom command
00:07:49.750 [259/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:07:49.750 [260/745] Generating lib/rte_latencystats_mingw with a custom command
00:07:49.750 [261/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:07:49.750 [262/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:07:50.013 [263/745] Generating lib/rte_lpm_def with a custom command
00:07:50.013 [264/745] Generating lib/rte_lpm_mingw with a custom command
00:07:50.013 [265/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:07:50.013 [266/745] Linking static target lib/librte_jobstats.a
00:07:50.013 [267/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:07:50.013 [268/745] Linking static target lib/librte_gro.a
00:07:50.013 [269/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:07:50.277 [270/745] Generating lib/rte_member_def with a custom command
00:07:50.277 [271/745] Generating lib/rte_member_mingw with a custom command
00:07:50.277 [272/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:07:50.277 [273/745] Generating lib/rte_pcapng_def with a custom command
00:07:50.277 [274/745] Generating lib/rte_pcapng_mingw with a custom command
00:07:50.277 [275/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:07:50.546 [276/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:07:50.546 [277/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:07:50.546 [278/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:07:50.546 [279/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:07:50.546 [280/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:07:50.805 [281/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:07:50.805 [282/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:07:50.805 [283/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:07:50.805 [284/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:07:50.805 [285/745] Linking static target lib/acl/libavx2_tmp.a
00:07:50.805 [286/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:07:50.805 [287/745] Linking static target lib/librte_ethdev.a
00:07:51.072 [288/745] Generating lib/rte_power_def with a custom command
00:07:51.072 [289/745] Generating lib/rte_power_mingw with a custom command
00:07:51.072 [290/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:07:51.072 [291/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:07:51.072 [292/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:07:51.072 [293/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:07:51.072 [294/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:07:51.072 [295/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o
00:07:51.072 [296/745] Generating lib/rte_rawdev_def with a custom command
00:07:51.072 [297/745] Generating lib/rte_rawdev_mingw with a custom command
00:07:51.072 [298/745] Generating lib/rte_regexdev_def with a custom command
00:07:51.072 [299/745] Generating lib/rte_regexdev_mingw with a custom command
00:07:51.072 [300/745] Generating lib/rte_dmadev_def with a custom command
00:07:51.072 [301/745] Generating lib/rte_dmadev_mingw with a custom command
00:07:51.072 [302/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:07:51.072 [303/745] Linking static target lib/librte_efd.a
00:07:51.072 [304/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:07:51.336 [305/745] Generating lib/rte_rib_mingw with a custom command
00:07:51.336 [306/745] Generating lib/rte_rib_def with a custom command
00:07:51.336 [307/745] Linking static target lib/member/libsketch_avx512_tmp.a
00:07:51.336 [308/745] Generating lib/rte_reorder_def with a custom command
00:07:51.336 [309/745] Generating lib/rte_reorder_mingw with a custom command
00:07:51.336 [310/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:07:51.336 [311/745] Linking static target lib/librte_latencystats.a
00:07:51.336 [312/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:07:51.336 [313/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:07:51.600 [314/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:07:51.600 [315/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:07:51.600 [316/745] Generating lib/rte_sched_def with a custom command
00:07:51.600 [317/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:07:51.600 [318/745] Generating lib/rte_sched_mingw with a custom command
00:07:51.600 [319/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o
00:07:51.600 [320/745] Linking static target lib/librte_gso.a
00:07:51.600 [321/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:07:51.600 [322/745] Linking static target lib/acl/libavx512_tmp.a
00:07:51.600 [323/745] Linking static target lib/librte_acl.a
00:07:51.600 [324/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:07:51.600 [325/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:07:51.600 [326/745] Generating lib/rte_security_def with a custom command
00:07:51.600 [327/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:07:51.600 [328/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:07:51.600 [329/745] Generating lib/rte_security_mingw with a custom command
00:07:51.600 [330/745] Linking static target lib/librte_hash.a
00:07:51.600 [331/745] Linking static target lib/librte_ip_frag.a
00:07:51.600 [332/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:07:51.600 [333/745] Generating lib/rte_stack_def with a custom command
00:07:51.866 [334/745] Generating lib/rte_stack_mingw with a custom command
00:07:51.866 [335/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:07:51.866 [336/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:07:51.866 [337/745] Linking static target lib/librte_rawdev.a
00:07:51.866 [338/745] Linking static target lib/librte_mbuf.a
00:07:51.866 [339/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:07:51.866 [340/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:07:51.866 [341/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:07:51.866 [342/745] Linking static target lib/librte_stack.a
00:07:51.866 [343/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:07:51.866 [344/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:07:51.866 [345/745] Linking static target lib/librte_dmadev.a
00:07:51.866 [346/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:07:52.127 [347/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:07:52.127 [348/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:07:52.127 [349/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:07:52.127 [350/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o
00:07:52.127 [351/745] Generating lib/rte_vhost_def with a custom command
00:07:52.391 [352/745] Generating lib/rte_vhost_mingw with a custom command
00:07:52.391 [353/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:07:52.391 [354/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:07:52.391 [355/745] Linking static target lib/librte_pcapng.a
00:07:52.654 [356/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:07:52.654 [357/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:07:52.654 [358/745] Linking static target lib/librte_regexdev.a
00:07:52.654 [359/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:07:52.654 [360/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:07:52.654 [361/745] Linking static target lib/librte_lpm.a
00:07:52.654 [362/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:07:52.654 [363/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:07:52.654 [364/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:07:52.654 [365/745] Generating lib/rte_ipsec_def with a custom command
00:07:52.915 [366/745] Generating lib/rte_ipsec_mingw with a custom command
00:07:52.915 [367/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:07:52.915 [368/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:07:52.915 [369/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:07:52.915 [370/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:07:52.915 [371/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:07:53.180 [372/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:07:53.180 [373/745] Linking static target lib/librte_reorder.a
00:07:53.180 [374/745] Generating lib/rte_fib_def with a custom command
00:07:53.180 [375/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:07:53.180 [376/745] Generating lib/rte_fib_mingw with a custom command
00:07:53.180 [377/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:07:53.180 [378/745] Linking static target lib/librte_power.a
00:07:53.180 [379/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:07:53.180 [380/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:07:53.439 [381/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:07:53.439 [382/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:07:53.439 [383/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:07:53.439 [384/745] Linking static target lib/librte_security.a
00:07:53.702 [385/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:07:53.702 [386/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:07:53.702 [387/745] Linking static target lib/librte_rib.a
00:07:53.966 [388/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:07:53.966 [389/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:07:53.966 [390/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:07:53.966 [391/745] Linking static target lib/librte_eventdev.a
00:07:53.966 [392/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:07:54.227 [393/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:07:54.227 [394/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o
00:07:54.227 [395/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:07:54.227 [396/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a
00:07:54.227 [397/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:07:54.227 [398/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o
00:07:54.227 [399/745] Linking static target lib/fib/libtrie_avx512_tmp.a
00:07:54.227 [400/745] Generating lib/rte_port_def with a custom command
00:07:54.227 [401/745] Generating lib/rte_port_mingw with a custom command
00:07:54.227 [402/745] Generating lib/rte_pdump_def with a custom command
00:07:54.493 [403/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:07:54.493 [404/745] Generating lib/rte_pdump_mingw with a custom command
00:07:54.493 [405/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:07:54.760 [406/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:07:54.760 [407/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:07:54.760 [408/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:07:54.760 [409/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:07:54.760 [410/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:07:54.760 [411/745] Linking static target lib/librte_cryptodev.a
00:07:54.760 [412/745] Linking static target lib/librte_member.a
00:07:55.019 [413/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:07:55.019 [414/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:07:55.019 [415/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:07:55.284 [416/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:07:55.284 [417/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:07:55.284 [418/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:07:55.284 [419/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:07:55.284 [420/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:07:55.284 [421/745] Linking static target lib/librte_sched.a
00:07:55.284 [422/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:07:55.546 [423/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:07:55.813 [424/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:07:55.813 [425/745] Linking static target lib/librte_fib.a
00:07:56.074 [426/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:07:56.074 [427/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:07:56.074 [428/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:07:56.075 [429/745] Generating lib/rte_table_def with a custom command
00:07:56.075 [430/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:07:56.075 [431/745] Generating lib/rte_table_mingw with a custom command
00:07:56.075 [432/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:07:56.338 [433/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:07:56.338 [434/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:07:56.338 [435/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:07:56.338 [436/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:07:56.338 [437/745] Generating lib/rte_pipeline_mingw with a custom command
00:07:56.338 [438/745] Generating lib/rte_pipeline_def with a custom command
00:07:56.338 [439/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:07:56.599 [440/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:07:56.599 [441/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:07:56.599 [442/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:07:56.599 [443/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:07:56.862 [444/745] Generating lib/rte_graph_def with a custom command
00:07:56.862 [445/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:07:56.862 [446/745] Generating lib/rte_graph_mingw with a custom command
00:07:56.862 [447/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:07:56.862 [448/745] Linking static target lib/librte_ipsec.a
00:07:56.862 [449/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:07:56.862 [450/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:07:57.124 [451/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:07:57.124 [452/745] Linking static target lib/librte_pdump.a
00:07:57.124 [453/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:07:57.386 [454/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:07:57.386 [455/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:07:57.386 [456/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:07:57.647 [457/745] Compiling C object lib/librte_node.a.p/node_null.c.o
00:07:57.647 [458/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:07:57.647 [459/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:07:57.647 [460/745] Generating lib/rte_node_def with a custom command
00:07:57.647 [461/745] Generating lib/rte_node_mingw with a custom command
00:07:57.909 [462/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:07:57.909 [463/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:07:57.909 [464/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:07:57.909 [465/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:07:57.909 [466/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:07:57.909 [467/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:57.909 [468/745] Generating drivers/rte_bus_pci_def with a custom command 00:07:57.909 [469/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:07:57.909 [470/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:07:57.909 [471/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:57.909 [472/745] Generating drivers/rte_bus_vdev_def with a custom command 00:07:58.171 [473/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:07:58.171 [474/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:07:58.171 [475/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:07:58.171 [476/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:07:58.171 [477/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:07:58.171 [478/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:07:58.171 [479/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:58.171 [480/745] Linking static target lib/librte_table.a 00:07:58.171 [481/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:58.171 [482/745] Generating drivers/rte_mempool_ring_def with a custom command 00:07:58.171 [483/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:07:58.171 [484/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:58.171 [485/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:58.171 [486/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:58.171 [487/745] Linking target lib/librte_eal.so.23.0 00:07:58.433 [488/745] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:58.433 [489/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:58.433 [490/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:07:58.433 [491/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:58.433 [492/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:07:58.710 [493/745] Linking target lib/librte_ring.so.23.0 00:07:58.710 [494/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:58.710 [495/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:07:58.710 [496/745] Linking target lib/librte_meter.so.23.0 00:07:58.710 [497/745] Linking target lib/librte_pci.so.23.0 00:07:58.710 [498/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:07:58.710 [499/745] Linking target lib/librte_timer.so.23.0 00:07:58.987 [500/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:07:58.987 [501/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:58.987 [502/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:07:58.987 [503/745] Linking target lib/librte_rcu.so.23.0 00:07:58.987 [504/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:07:58.987 [505/745] Linking target lib/librte_mempool.so.23.0 00:07:58.987 [506/745] Linking target lib/librte_cfgfile.so.23.0 00:07:58.987 [507/745] Linking target lib/librte_acl.so.23.0 00:07:58.987 [508/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:07:58.987 [509/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:07:58.987 [510/745] Linking target lib/librte_jobstats.so.23.0 00:07:58.987 [511/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:07:58.987 [512/745] Linking target lib/librte_rawdev.so.23.0 00:07:58.987 [513/745] 
Linking static target lib/librte_graph.a 00:07:58.987 [514/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:07:59.259 [515/745] Linking target lib/librte_dmadev.so.23.0 00:07:59.259 [516/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:59.259 [517/745] Linking static target drivers/librte_bus_vdev.a 00:07:59.259 [518/745] Linking target lib/librte_stack.so.23.0 00:07:59.259 [519/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:07:59.259 [520/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:59.260 [521/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:07:59.260 [522/745] Linking target lib/librte_mbuf.so.23.0 00:07:59.260 [523/745] Linking target lib/librte_rib.so.23.0 00:07:59.260 [524/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:07:59.260 [525/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:07:59.260 [526/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:07:59.260 [527/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:07:59.260 [528/745] Linking static target lib/librte_port.a 00:07:59.260 [529/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:07:59.519 [530/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:59.519 [531/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:07:59.519 [532/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:07:59.519 [533/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:07:59.519 [534/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:07:59.519 [535/745] Linking target lib/librte_fib.so.23.0 00:07:59.519 [536/745] Linking target 
lib/librte_net.so.23.0 00:07:59.519 [537/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:59.519 [538/745] Linking target lib/librte_bbdev.so.23.0 00:07:59.519 [539/745] Linking target lib/librte_compressdev.so.23.0 00:07:59.519 [540/745] Linking target lib/librte_cryptodev.so.23.0 00:07:59.779 [541/745] Linking target lib/librte_distributor.so.23.0 00:07:59.779 [542/745] Linking target lib/librte_gpudev.so.23.0 00:07:59.779 [543/745] Linking target lib/librte_regexdev.so.23.0 00:07:59.779 [544/745] Linking target lib/librte_reorder.so.23.0 00:07:59.779 [545/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:07:59.779 [546/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:07:59.779 [547/745] Linking target lib/librte_sched.so.23.0 00:07:59.779 [548/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:59.779 [549/745] Linking target drivers/librte_bus_vdev.so.23.0 00:07:59.779 [550/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:07:59.779 [551/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:59.779 [552/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:07:59.779 [553/745] Linking static target drivers/librte_bus_pci.a 00:07:59.779 [554/745] Linking target lib/librte_hash.so.23.0 00:07:59.779 [555/745] Linking target lib/librte_cmdline.so.23.0 00:07:59.779 [556/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:59.779 [557/745] Linking target lib/librte_ethdev.so.23.0 00:08:00.044 [558/745] Linking target lib/librte_security.so.23.0 00:08:00.044 [559/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:08:00.044 [560/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:08:00.044 
[561/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:08:00.044 [562/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:08:00.303 [563/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:08:00.303 [564/745] Generating drivers/rte_net_i40e_def with a custom command 00:08:00.303 [565/745] Linking target lib/librte_efd.so.23.0 00:08:00.303 [566/745] Linking target lib/librte_metrics.so.23.0 00:08:00.303 [567/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:08:00.303 [568/745] Linking target lib/librte_bpf.so.23.0 00:08:00.303 [569/745] Linking target lib/librte_gro.so.23.0 00:08:00.303 [570/745] Linking target lib/librte_eventdev.so.23.0 00:08:00.303 [571/745] Linking target lib/librte_gso.so.23.0 00:08:00.303 [572/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:08:00.303 [573/745] Linking target lib/librte_ip_frag.so.23.0 00:08:00.303 [574/745] Linking target lib/librte_lpm.so.23.0 00:08:00.303 [575/745] Linking target lib/librte_member.so.23.0 00:08:00.565 [576/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:08:00.565 [577/745] Linking target lib/librte_pcapng.so.23.0 00:08:00.565 [578/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:00.565 [579/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:08:00.565 [580/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:08:00.565 [581/745] Linking target lib/librte_bitratestats.so.23.0 00:08:00.565 [582/745] Linking target lib/librte_latencystats.so.23.0 00:08:00.565 [583/745] Linking target lib/librte_ipsec.so.23.0 00:08:00.565 [584/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:08:00.565 [585/745] Linking target 
lib/librte_power.so.23.0 00:08:00.565 [586/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:08:00.565 [587/745] Linking target drivers/librte_bus_pci.so.23.0 00:08:00.565 [588/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:08:00.826 [589/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:08:00.826 [590/745] Linking target lib/librte_port.so.23.0 00:08:00.826 [591/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:08:00.826 [592/745] Linking target lib/librte_graph.so.23.0 00:08:00.826 [593/745] Linking target lib/librte_pdump.so.23.0 00:08:00.826 [594/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:08:01.090 [595/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:08:01.090 [596/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:08:01.090 [597/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:08:01.090 [598/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:08:01.090 [599/745] Linking target lib/librte_table.so.23.0 00:08:01.090 [600/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:08:01.090 [601/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:08:01.353 [602/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:08:01.353 [603/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:08:01.353 [604/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:08:01.353 [605/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:08:01.353 [606/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:08:01.353 [607/745] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:01.353 [608/745] Linking static target drivers/librte_mempool_ring.a 00:08:01.615 [609/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:01.615 [610/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:08:01.615 [611/745] Linking target drivers/librte_mempool_ring.so.23.0 00:08:01.615 [612/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:08:01.879 [613/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:08:01.879 [614/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:08:01.879 [615/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:08:02.451 [616/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:08:02.451 [617/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:08:02.712 [618/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:08:02.712 [619/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:08:02.712 [620/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:08:02.973 [621/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:08:03.237 [622/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:08:03.237 [623/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:08:03.237 [624/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:08:03.237 [625/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:08:03.237 [626/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:08:03.237 [627/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:08:03.237 [628/745] Linking static target 
drivers/net/i40e/base/libi40e_base.a 00:08:03.237 [629/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:08:03.237 [630/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:08:03.497 [631/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:08:03.755 [632/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:08:03.755 [633/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:08:04.020 [634/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:08:04.020 [635/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:08:04.279 [636/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:08:04.542 [637/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:08:04.542 [638/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:08:04.801 [639/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:08:04.801 [640/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:08:04.801 [641/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:08:05.739 [642/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:08:05.739 [643/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:08:05.739 [644/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:08:05.739 [645/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:08:05.739 [646/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:08:06.006 [647/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:08:06.269 [648/745] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:08:06.269 [649/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:08:06.532 [650/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:08:06.793 [651/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:08:06.793 [652/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:08:07.057 [653/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:08:07.057 [654/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:08:07.057 [655/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:08:07.057 [656/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:08:07.057 [657/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:08:07.057 [658/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:08:07.323 [659/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:08:07.323 [660/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:08:07.323 [661/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:08:07.323 [662/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:08:07.582 [663/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:08:07.582 [664/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:08:07.582 [665/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:08:07.582 [666/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:08:07.846 [667/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:08:07.846 [668/745] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:08:08.106 [669/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:08:08.106 [670/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:08:08.106 [671/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:08:08.683 [672/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:08:08.683 [673/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:08:08.683 [674/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:08:08.947 [675/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:08:08.947 [676/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:08:09.211 [677/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:08:09.211 [678/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:08:09.211 [679/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:08:09.470 [680/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:08:09.470 [681/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:08:09.470 [682/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:08:09.729 [683/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:08:09.729 [684/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:08:09.729 [685/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:08:09.729 [686/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:08:09.729 [687/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:08:09.988 [688/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:08:09.988 [689/745] Compiling C object 
app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:08:09.988 [690/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:08:09.988 [691/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:08:10.247 [692/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:08:10.506 [693/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:08:10.506 [694/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:08:10.506 [695/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:08:10.507 [696/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:08:10.507 [697/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:08:10.507 [698/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:08:10.507 [699/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:08:10.507 [700/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:08:10.507 [701/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:08:10.507 [702/745] Linking static target drivers/librte_net_i40e.a 00:08:10.766 [703/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:08:10.766 [704/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:08:10.766 [705/745] Linking static target lib/librte_node.a 00:08:10.766 [706/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:08:11.025 [707/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:08:11.025 [708/745] Linking target lib/librte_node.so.23.0 00:08:11.025 [709/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:08:11.283 [710/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:08:11.283 [711/745] Linking target 
drivers/librte_net_i40e.so.23.0 00:08:11.284 [712/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:08:11.284 [713/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:08:11.850 [714/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:08:12.109 [715/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:08:12.367 [716/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:08:12.625 [717/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:08:14.000 [718/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:08:15.374 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:08:20.668 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:08:52.732 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:08:52.732 [722/745] Linking static target lib/librte_vhost.a 00:08:52.990 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:08:52.990 [724/745] Linking target lib/librte_vhost.so.23.0 00:09:11.066 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:09:11.066 [726/745] Linking static target lib/librte_pipeline.a 00:09:11.066 [727/745] Linking target app/dpdk-dumpcap 00:09:11.066 [728/745] Linking target app/dpdk-proc-info 00:09:11.066 [729/745] Linking target app/dpdk-test-acl 00:09:11.066 [730/745] Linking target app/dpdk-test-cmdline 00:09:11.066 [731/745] Linking target app/dpdk-test-pipeline 00:09:11.066 [732/745] Linking target app/dpdk-test-compress-perf 00:09:11.066 [733/745] Linking target app/dpdk-test-security-perf 00:09:11.066 [734/745] Linking target app/dpdk-test-flow-perf 00:09:11.066 [735/745] Linking target app/dpdk-test-regex 00:09:11.066 [736/745] Linking target app/dpdk-test-gpudev 00:09:11.066 [737/745] Linking target app/dpdk-test-crypto-perf 00:09:11.066 [738/745] 
Linking target app/dpdk-test-bbdev 00:09:11.066 [739/745] Linking target app/dpdk-pdump 00:09:11.066 [740/745] Linking target app/dpdk-test-fib 00:09:11.066 [741/745] Linking target app/dpdk-testpmd 00:09:11.066 [742/745] Linking target app/dpdk-test-sad 00:09:11.066 [743/745] Linking target app/dpdk-test-eventdev 00:09:12.442 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:09:12.442 [745/745] Linking target lib/librte_pipeline.so.23.0 00:09:12.442 02:15:02 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s 00:09:12.442 02:15:02 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:09:12.442 02:15:02 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j32 install 00:09:12.442 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:09:12.442 [0/1] Installing files. 00:09:13.014 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:09:13.014 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:09:13.014 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:13.015 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:09:13.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:09:13.015 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:09:13.016 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:09:13.016 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:09:13.016 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:09:13.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 
00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:13.018 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:09:13.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:09:13.019 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:09:13.019 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:09:13.019 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:09:13.019 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:09:13.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:09:13.019 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.019 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.019 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 
Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_hash.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_distributor.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_lpm.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_sched.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.020 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.592 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.592 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.592 Installing lib/librte_pipeline.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.592 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.592 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.592 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.592 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.592 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.592 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.592 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:09:13.592 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.592 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:09:13.592 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.592 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:09:13.592 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.592 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:09:13.592 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:09:13.592 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:09:13.592 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:09:13.592 Installing app/dpdk-test-acl to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:09:13.592 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:09:13.592 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:09:13.592 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:09:13.592 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:09:13.592 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:09:13.592 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:09:13.592 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:09:13.592 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:09:13.592 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:09:13.592 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:09:13.592 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:09:13.592 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:09:13.592 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:09:13.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.592 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:09:13.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:09:13.592 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.593 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.594 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:09:13.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:09:13.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:09:13.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:09:13.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:09:13.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
00:09:13.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
00:09:13.597 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23
00:09:13.597 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so
00:09:13.597 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23
00:09:13.597 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so
00:09:13.597 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23
00:09:13.597 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so
00:09:13.597 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23
00:09:13.597 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so
00:09:13.597 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23
00:09:13.597 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so
00:09:13.597 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23
00:09:13.597 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so
00:09:13.597 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23
00:09:13.597 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so
00:09:13.597 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23
00:09:13.597 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so
00:09:13.597 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23
00:09:13.597 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so
00:09:13.597 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23
00:09:13.597 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so
00:09:13.597 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23
00:09:13.597 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so
00:09:13.597 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23
00:09:13.597 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so
00:09:13.597 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23
00:09:13.597 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so
00:09:13.597 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23
00:09:13.597 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so
00:09:13.597 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23
00:09:13.597 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so
00:09:13.597 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23
00:09:13.597 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so
00:09:13.597 Installing symlink pointing to
librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:09:13.597 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:09:13.597 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:09:13.597 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:09:13.597 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:09:13.597 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:09:13.597 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:09:13.597 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:09:13.597 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:09:13.597 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:09:13.597 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:09:13.597 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:09:13.597 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:09:13.597 Installing symlink pointing to librte_distributor.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:09:13.597 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:09:13.598 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:09:13.598 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:09:13.598 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:09:13.598 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:09:13.598 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:09:13.598 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:09:13.598 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:09:13.598 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:09:13.598 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:09:13.598 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:09:13.598 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:09:13.598 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:09:13.598 Installing 
symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:09:13.598 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:09:13.598 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:09:13.598 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:09:13.598 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:09:13.598 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:09:13.598 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:09:13.598 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:09:13.598 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:09:13.598 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:09:13.598 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:09:13.598 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:09:13.598 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:09:13.598 Installing symlink pointing to librte_regexdev.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:09:13.598 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:09:13.598 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:09:13.598 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:09:13.598 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:09:13.598 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:09:13.598 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:09:13.598 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:09:13.598 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:09:13.598 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:09:13.598 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:09:13.598 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:09:13.598 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:09:13.598 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:09:13.598 
Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:09:13.598 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:09:13.598 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:09:13.598 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:09:13.598 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:09:13.598 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:09:13.598 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:09:13.598 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:09:13.598 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:09:13.598 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:09:13.598 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:09:13.598 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:09:13.598 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:09:13.598 Installing symlink pointing to librte_pipeline.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:09:13.598 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:09:13.598 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:09:13.598 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:09:13.598 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:09:13.598 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:09:13.598 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:09:13.598 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:09:13.598 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:09:13.598 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:09:13.598 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:09:13.598 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:09:13.598 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:09:13.598 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:09:13.598 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:09:13.598 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:09:13.598 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:09:13.598 
'./librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:09:13.598 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:09:13.598 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:09:13.598 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:09:13.598 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:09:13.598 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:09:13.598 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:09:13.598 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:09:13.598 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:09:13.598 02:15:03 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat 00:09:13.598 02:15:03 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:13.598 00:09:13.598 real 1m36.975s 00:09:13.598 user 15m9.911s 00:09:13.598 sys 1m47.035s 00:09:13.598 02:15:03 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:09:13.599 02:15:03 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:09:13.599 ************************************ 00:09:13.599 END TEST build_native_dpdk 00:09:13.599 ************************************ 00:09:13.599 02:15:03 -- common/autotest_common.sh@1142 -- $ return 0 00:09:13.599 02:15:03 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:09:13.599 02:15:03 -- spdk/autobuild.sh@47 -- $ 
[[ 0 -eq 1 ]] 00:09:13.599 02:15:03 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:09:13.599 02:15:03 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:09:13.599 02:15:03 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:09:13.599 02:15:03 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:09:13.599 02:15:03 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:09:13.599 02:15:03 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:09:13.857 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:09:13.857 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:09:13.857 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:09:13.857 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:14.116 Using 'verbs' RDMA provider 00:09:24.665 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:09:36.910 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:09:36.910 Creating mk/config.mk...done. 00:09:36.910 Creating mk/cc.flags.mk...done. 00:09:36.910 Type 'make' to build. 
00:09:36.910 02:15:25 -- spdk/autobuild.sh@69 -- $ run_test make make -j32 00:09:36.910 02:15:25 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:09:36.910 02:15:25 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:09:36.910 02:15:25 -- common/autotest_common.sh@10 -- $ set +x 00:09:36.910 ************************************ 00:09:36.910 START TEST make 00:09:36.910 ************************************ 00:09:36.910 02:15:25 make -- common/autotest_common.sh@1123 -- $ make -j32 00:09:36.910 make[1]: Nothing to be done for 'all'. 00:09:37.485 The Meson build system 00:09:37.485 Version: 1.3.1 00:09:37.485 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:09:37.485 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:09:37.485 Build type: native build 00:09:37.485 Project name: libvfio-user 00:09:37.485 Project version: 0.0.1 00:09:37.485 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:09:37.485 C linker for the host machine: gcc ld.bfd 2.39-16 00:09:37.485 Host machine cpu family: x86_64 00:09:37.485 Host machine cpu: x86_64 00:09:37.485 Run-time dependency threads found: YES 00:09:37.485 Library dl found: YES 00:09:37.485 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:09:37.485 Run-time dependency json-c found: YES 0.17 00:09:37.485 Run-time dependency cmocka found: YES 1.1.7 00:09:37.485 Program pytest-3 found: NO 00:09:37.485 Program flake8 found: NO 00:09:37.485 Program misspell-fixer found: NO 00:09:37.485 Program restructuredtext-lint found: NO 00:09:37.485 Program valgrind found: YES (/usr/bin/valgrind) 00:09:37.485 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:09:37.485 Compiler for C supports arguments -Wmissing-declarations: YES 00:09:37.485 Compiler for C supports arguments -Wwrite-strings: YES 00:09:37.485 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but 
uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:09:37.485 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:09:37.485 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:09:37.485 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:09:37.485 Build targets in project: 8 00:09:37.485 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:09:37.485 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:09:37.485 00:09:37.485 libvfio-user 0.0.1 00:09:37.485 00:09:37.485 User defined options 00:09:37.485 buildtype : debug 00:09:37.485 default_library: shared 00:09:37.485 libdir : /usr/local/lib 00:09:37.485 00:09:37.485 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:09:38.450 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:09:38.450 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:09:38.450 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:09:38.450 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:09:38.450 [4/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:09:38.450 [5/37] Compiling C object samples/null.p/null.c.o 00:09:38.450 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:09:38.450 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:09:38.708 [8/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:09:38.708 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:09:38.708 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:09:38.708 [11/37] Compiling C object test/unit_tests.p/mocks.c.o 
00:09:38.708 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:09:38.708 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:09:38.708 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:09:38.708 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:09:38.708 [16/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:09:38.708 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:09:38.708 [18/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:09:38.708 [19/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:09:38.708 [20/37] Compiling C object samples/server.p/server.c.o 00:09:38.708 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:09:38.708 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:09:38.708 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:09:38.708 [24/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:09:38.708 [25/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:09:38.708 [26/37] Compiling C object samples/client.p/client.c.o 00:09:38.969 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:09:38.969 [28/37] Linking target samples/client 00:09:38.969 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:09:38.969 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:09:38.969 [31/37] Linking target test/unit_tests 00:09:39.226 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:09:39.226 [33/37] Linking target samples/lspci 00:09:39.226 [34/37] Linking target samples/server 00:09:39.226 [35/37] Linking target samples/gpio-pci-idio-16 00:09:39.226 [36/37] Linking target samples/shadow_ioeventfd_server 00:09:39.226 [37/37] Linking target samples/null 00:09:39.226 INFO: autodetecting backend as ninja 00:09:39.226 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:09:39.226 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:09:40.168 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:09:40.168 ninja: no work to do. 00:09:55.058 CC lib/ut_mock/mock.o 00:09:55.058 CC lib/ut/ut.o 00:09:55.058 CC lib/log/log.o 00:09:55.058 CC lib/log/log_flags.o 00:09:55.058 CC lib/log/log_deprecated.o 00:09:55.058 LIB libspdk_log.a 00:09:55.058 LIB libspdk_ut.a 00:09:55.058 LIB libspdk_ut_mock.a 00:09:55.058 SO libspdk_ut_mock.so.6.0 00:09:55.058 SO libspdk_ut.so.2.0 00:09:55.058 SO libspdk_log.so.7.0 00:09:55.058 SYMLINK libspdk_ut.so 00:09:55.058 SYMLINK libspdk_ut_mock.so 00:09:55.058 SYMLINK libspdk_log.so 00:09:55.058 CC lib/ioat/ioat.o 00:09:55.058 CC lib/util/base64.o 00:09:55.058 CC lib/dma/dma.o 00:09:55.058 CC lib/util/bit_array.o 00:09:55.058 CXX lib/trace_parser/trace.o 00:09:55.058 CC lib/util/cpuset.o 00:09:55.058 CC lib/util/crc16.o 00:09:55.058 CC lib/util/crc32.o 00:09:55.058 CC lib/util/crc32c.o 00:09:55.058 CC lib/util/crc32_ieee.o 00:09:55.058 CC lib/util/crc64.o 00:09:55.058 CC lib/util/dif.o 00:09:55.058 CC lib/util/file.o 00:09:55.058 CC lib/util/fd.o 00:09:55.058 CC lib/util/hexlify.o 00:09:55.058 CC lib/util/iov.o 00:09:55.058 CC lib/util/math.o 00:09:55.058 CC lib/util/pipe.o 00:09:55.058 CC lib/util/strerror_tls.o 00:09:55.058 CC lib/util/string.o 00:09:55.058 CC lib/util/uuid.o 00:09:55.058 CC lib/util/fd_group.o 00:09:55.058 CC lib/util/xor.o 00:09:55.058 CC lib/util/zipf.o 00:09:55.058 CC lib/vfio_user/host/vfio_user_pci.o 00:09:55.058 CC lib/vfio_user/host/vfio_user.o 00:09:55.058 LIB libspdk_dma.a 00:09:55.058 SO libspdk_dma.so.4.0 00:09:55.058 SYMLINK libspdk_dma.so 00:09:55.058 LIB libspdk_ioat.a 00:09:55.058 LIB libspdk_vfio_user.a 
00:09:55.058 SO libspdk_ioat.so.7.0 00:09:55.058 SO libspdk_vfio_user.so.5.0 00:09:55.058 SYMLINK libspdk_ioat.so 00:09:55.058 SYMLINK libspdk_vfio_user.so 00:09:55.058 LIB libspdk_util.a 00:09:55.058 SO libspdk_util.so.9.1 00:09:55.058 SYMLINK libspdk_util.so 00:09:55.317 CC lib/conf/conf.o 00:09:55.317 CC lib/env_dpdk/env.o 00:09:55.318 CC lib/env_dpdk/memory.o 00:09:55.318 CC lib/env_dpdk/pci.o 00:09:55.318 CC lib/env_dpdk/init.o 00:09:55.318 CC lib/env_dpdk/threads.o 00:09:55.318 CC lib/env_dpdk/pci_ioat.o 00:09:55.318 CC lib/idxd/idxd.o 00:09:55.318 CC lib/rdma_provider/common.o 00:09:55.318 CC lib/env_dpdk/pci_virtio.o 00:09:55.318 CC lib/rdma_provider/rdma_provider_verbs.o 00:09:55.318 CC lib/idxd/idxd_user.o 00:09:55.318 CC lib/idxd/idxd_kernel.o 00:09:55.318 CC lib/env_dpdk/pci_vmd.o 00:09:55.318 CC lib/env_dpdk/pci_idxd.o 00:09:55.318 CC lib/env_dpdk/pci_event.o 00:09:55.318 CC lib/env_dpdk/sigbus_handler.o 00:09:55.318 CC lib/vmd/vmd.o 00:09:55.318 CC lib/vmd/led.o 00:09:55.318 CC lib/env_dpdk/pci_dpdk.o 00:09:55.318 CC lib/env_dpdk/pci_dpdk_2207.o 00:09:55.318 CC lib/rdma_utils/rdma_utils.o 00:09:55.318 CC lib/env_dpdk/pci_dpdk_2211.o 00:09:55.318 CC lib/json/json_parse.o 00:09:55.318 CC lib/json/json_util.o 00:09:55.318 CC lib/json/json_write.o 00:09:55.318 LIB libspdk_trace_parser.a 00:09:55.318 SO libspdk_trace_parser.so.5.0 00:09:55.576 SYMLINK libspdk_trace_parser.so 00:09:55.576 LIB libspdk_rdma_provider.a 00:09:55.576 LIB libspdk_conf.a 00:09:55.576 SO libspdk_rdma_provider.so.6.0 00:09:55.576 SO libspdk_conf.so.6.0 00:09:55.576 LIB libspdk_json.a 00:09:55.576 SYMLINK libspdk_conf.so 00:09:55.576 SYMLINK libspdk_rdma_provider.so 00:09:55.576 LIB libspdk_rdma_utils.a 00:09:55.834 SO libspdk_json.so.6.0 00:09:55.834 SO libspdk_rdma_utils.so.1.0 00:09:55.834 SYMLINK libspdk_rdma_utils.so 00:09:55.834 SYMLINK libspdk_json.so 00:09:55.834 LIB libspdk_idxd.a 00:09:55.834 CC lib/jsonrpc/jsonrpc_server.o 00:09:55.834 CC lib/jsonrpc/jsonrpc_server_tcp.o 
00:09:55.834 CC lib/jsonrpc/jsonrpc_client.o 00:09:55.834 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:09:55.834 SO libspdk_idxd.so.12.0 00:09:56.092 SYMLINK libspdk_idxd.so 00:09:56.092 LIB libspdk_vmd.a 00:09:56.092 SO libspdk_vmd.so.6.0 00:09:56.092 SYMLINK libspdk_vmd.so 00:09:56.092 LIB libspdk_jsonrpc.a 00:09:56.351 SO libspdk_jsonrpc.so.6.0 00:09:56.351 SYMLINK libspdk_jsonrpc.so 00:09:56.609 CC lib/rpc/rpc.o 00:09:56.867 LIB libspdk_rpc.a 00:09:56.867 SO libspdk_rpc.so.6.0 00:09:56.867 SYMLINK libspdk_rpc.so 00:09:56.867 CC lib/keyring/keyring.o 00:09:56.867 CC lib/keyring/keyring_rpc.o 00:09:56.867 CC lib/trace/trace.o 00:09:56.867 CC lib/notify/notify.o 00:09:56.867 CC lib/trace/trace_flags.o 00:09:57.125 CC lib/notify/notify_rpc.o 00:09:57.125 CC lib/trace/trace_rpc.o 00:09:57.125 LIB libspdk_notify.a 00:09:57.125 SO libspdk_notify.so.6.0 00:09:57.125 LIB libspdk_trace.a 00:09:57.383 LIB libspdk_keyring.a 00:09:57.383 SYMLINK libspdk_notify.so 00:09:57.383 SO libspdk_trace.so.10.0 00:09:57.383 SO libspdk_keyring.so.1.0 00:09:57.383 LIB libspdk_env_dpdk.a 00:09:57.383 SYMLINK libspdk_trace.so 00:09:57.383 SYMLINK libspdk_keyring.so 00:09:57.383 SO libspdk_env_dpdk.so.14.1 00:09:57.383 CC lib/thread/thread.o 00:09:57.383 CC lib/thread/iobuf.o 00:09:57.383 CC lib/sock/sock.o 00:09:57.383 CC lib/sock/sock_rpc.o 00:09:57.641 SYMLINK libspdk_env_dpdk.so 00:09:57.900 LIB libspdk_sock.a 00:09:57.900 SO libspdk_sock.so.10.0 00:09:57.900 SYMLINK libspdk_sock.so 00:09:58.161 CC lib/nvme/nvme_ctrlr_cmd.o 00:09:58.161 CC lib/nvme/nvme_ctrlr.o 00:09:58.161 CC lib/nvme/nvme_fabric.o 00:09:58.161 CC lib/nvme/nvme_ns_cmd.o 00:09:58.161 CC lib/nvme/nvme_ns.o 00:09:58.161 CC lib/nvme/nvme_pcie_common.o 00:09:58.161 CC lib/nvme/nvme_pcie.o 00:09:58.161 CC lib/nvme/nvme_qpair.o 00:09:58.161 CC lib/nvme/nvme.o 00:09:58.161 CC lib/nvme/nvme_quirks.o 00:09:58.161 CC lib/nvme/nvme_transport.o 00:09:58.161 CC lib/nvme/nvme_discovery.o 00:09:58.161 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 
00:09:58.161 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:09:58.161 CC lib/nvme/nvme_tcp.o 00:09:58.161 CC lib/nvme/nvme_opal.o 00:09:58.161 CC lib/nvme/nvme_io_msg.o 00:09:58.161 CC lib/nvme/nvme_poll_group.o 00:09:58.161 CC lib/nvme/nvme_stubs.o 00:09:58.161 CC lib/nvme/nvme_zns.o 00:09:58.161 CC lib/nvme/nvme_auth.o 00:09:58.161 CC lib/nvme/nvme_cuse.o 00:09:58.161 CC lib/nvme/nvme_vfio_user.o 00:09:58.161 CC lib/nvme/nvme_rdma.o 00:09:59.536 LIB libspdk_thread.a 00:09:59.536 SO libspdk_thread.so.10.1 00:09:59.536 SYMLINK libspdk_thread.so 00:09:59.536 CC lib/init/json_config.o 00:09:59.536 CC lib/blob/blobstore.o 00:09:59.536 CC lib/accel/accel.o 00:09:59.536 CC lib/blob/request.o 00:09:59.536 CC lib/accel/accel_rpc.o 00:09:59.536 CC lib/init/subsystem.o 00:09:59.536 CC lib/virtio/virtio.o 00:09:59.536 CC lib/virtio/virtio_vhost_user.o 00:09:59.536 CC lib/init/subsystem_rpc.o 00:09:59.536 CC lib/blob/zeroes.o 00:09:59.536 CC lib/blob/blob_bs_dev.o 00:09:59.536 CC lib/virtio/virtio_vfio_user.o 00:09:59.536 CC lib/accel/accel_sw.o 00:09:59.536 CC lib/virtio/virtio_pci.o 00:09:59.536 CC lib/init/rpc.o 00:09:59.536 CC lib/vfu_tgt/tgt_endpoint.o 00:09:59.536 CC lib/vfu_tgt/tgt_rpc.o 00:09:59.794 LIB libspdk_init.a 00:09:59.794 SO libspdk_init.so.5.0 00:10:00.053 SYMLINK libspdk_init.so 00:10:00.053 LIB libspdk_vfu_tgt.a 00:10:00.053 SO libspdk_vfu_tgt.so.3.0 00:10:00.053 LIB libspdk_virtio.a 00:10:00.053 SO libspdk_virtio.so.7.0 00:10:00.053 SYMLINK libspdk_vfu_tgt.so 00:10:00.053 SYMLINK libspdk_virtio.so 00:10:00.053 CC lib/event/app.o 00:10:00.053 CC lib/event/reactor.o 00:10:00.053 CC lib/event/log_rpc.o 00:10:00.053 CC lib/event/app_rpc.o 00:10:00.053 CC lib/event/scheduler_static.o 00:10:00.620 LIB libspdk_event.a 00:10:00.620 SO libspdk_event.so.14.0 00:10:00.620 SYMLINK libspdk_event.so 00:10:00.620 LIB libspdk_accel.a 00:10:00.620 SO libspdk_accel.so.15.1 00:10:00.878 SYMLINK libspdk_accel.so 00:10:00.878 CC lib/bdev/bdev.o 00:10:00.878 CC lib/bdev/bdev_zone.o 
00:10:00.878 CC lib/bdev/bdev_rpc.o 00:10:00.878 CC lib/bdev/part.o 00:10:00.878 CC lib/bdev/scsi_nvme.o 00:10:00.878 LIB libspdk_nvme.a 00:10:01.136 SO libspdk_nvme.so.13.1 00:10:01.395 SYMLINK libspdk_nvme.so 00:10:02.772 LIB libspdk_blob.a 00:10:02.772 SO libspdk_blob.so.11.0 00:10:02.772 SYMLINK libspdk_blob.so 00:10:02.772 CC lib/lvol/lvol.o 00:10:02.772 CC lib/blobfs/blobfs.o 00:10:02.772 CC lib/blobfs/tree.o 00:10:03.336 LIB libspdk_bdev.a 00:10:03.336 SO libspdk_bdev.so.15.1 00:10:03.599 SYMLINK libspdk_bdev.so 00:10:03.599 CC lib/nvmf/ctrlr.o 00:10:03.599 CC lib/nvmf/ctrlr_discovery.o 00:10:03.599 CC lib/ftl/ftl_core.o 00:10:03.599 CC lib/ftl/ftl_init.o 00:10:03.599 CC lib/nvmf/ctrlr_bdev.o 00:10:03.599 CC lib/nvmf/subsystem.o 00:10:03.599 CC lib/ftl/ftl_layout.o 00:10:03.599 CC lib/nvmf/nvmf.o 00:10:03.599 CC lib/scsi/dev.o 00:10:03.599 CC lib/ftl/ftl_debug.o 00:10:03.599 CC lib/nvmf/nvmf_rpc.o 00:10:03.599 CC lib/nvmf/transport.o 00:10:03.599 CC lib/scsi/lun.o 00:10:03.599 CC lib/ftl/ftl_io.o 00:10:03.599 CC lib/ftl/ftl_sb.o 00:10:03.599 CC lib/nvmf/tcp.o 00:10:03.599 CC lib/scsi/port.o 00:10:03.599 CC lib/ftl/ftl_l2p.o 00:10:03.599 CC lib/nvmf/stubs.o 00:10:03.599 CC lib/scsi/scsi.o 00:10:03.599 CC lib/ftl/ftl_l2p_flat.o 00:10:03.599 CC lib/nvmf/mdns_server.o 00:10:03.599 CC lib/scsi/scsi_bdev.o 00:10:03.599 CC lib/ftl/ftl_nv_cache.o 00:10:03.599 CC lib/scsi/scsi_pr.o 00:10:03.599 CC lib/ftl/ftl_band.o 00:10:03.599 CC lib/nbd/nbd.o 00:10:03.599 CC lib/nvmf/vfio_user.o 00:10:03.599 CC lib/nbd/nbd_rpc.o 00:10:03.599 CC lib/ublk/ublk.o 00:10:03.859 LIB libspdk_lvol.a 00:10:03.859 SO libspdk_lvol.so.10.0 00:10:03.859 LIB libspdk_blobfs.a 00:10:03.859 SO libspdk_blobfs.so.10.0 00:10:04.121 SYMLINK libspdk_lvol.so 00:10:04.121 CC lib/scsi/scsi_rpc.o 00:10:04.121 CC lib/ftl/ftl_band_ops.o 00:10:04.121 CC lib/nvmf/rdma.o 00:10:04.121 CC lib/ublk/ublk_rpc.o 00:10:04.121 SYMLINK libspdk_blobfs.so 00:10:04.121 CC lib/ftl/ftl_writer.o 00:10:04.121 CC 
lib/ftl/ftl_rq.o 00:10:04.121 CC lib/nvmf/auth.o 00:10:04.121 CC lib/ftl/ftl_reloc.o 00:10:04.121 CC lib/ftl/ftl_l2p_cache.o 00:10:04.121 CC lib/ftl/ftl_p2l.o 00:10:04.121 CC lib/scsi/task.o 00:10:04.393 CC lib/ftl/mngt/ftl_mngt.o 00:10:04.393 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:10:04.393 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:10:04.393 CC lib/ftl/mngt/ftl_mngt_startup.o 00:10:04.393 CC lib/ftl/mngt/ftl_mngt_md.o 00:10:04.393 CC lib/ftl/mngt/ftl_mngt_misc.o 00:10:04.393 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:10:04.393 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:10:04.393 CC lib/ftl/mngt/ftl_mngt_band.o 00:10:04.652 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:10:04.652 LIB libspdk_nbd.a 00:10:04.652 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:10:04.652 SO libspdk_nbd.so.7.0 00:10:04.652 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:10:04.652 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:10:04.652 CC lib/ftl/utils/ftl_conf.o 00:10:04.652 LIB libspdk_scsi.a 00:10:04.652 SYMLINK libspdk_nbd.so 00:10:04.652 CC lib/ftl/utils/ftl_md.o 00:10:04.916 CC lib/ftl/utils/ftl_mempool.o 00:10:04.916 CC lib/ftl/utils/ftl_bitmap.o 00:10:04.916 CC lib/ftl/utils/ftl_property.o 00:10:04.916 SO libspdk_scsi.so.9.0 00:10:04.916 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:10:04.916 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:10:04.916 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:10:04.916 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:10:04.916 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:10:04.916 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:10:04.916 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:10:04.916 LIB libspdk_ublk.a 00:10:04.916 SYMLINK libspdk_scsi.so 00:10:04.916 CC lib/ftl/upgrade/ftl_sb_v3.o 00:10:04.916 CC lib/ftl/upgrade/ftl_sb_v5.o 00:10:05.177 SO libspdk_ublk.so.3.0 00:10:05.177 CC lib/ftl/nvc/ftl_nvc_dev.o 00:10:05.177 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:10:05.177 CC lib/ftl/base/ftl_base_dev.o 00:10:05.177 CC lib/ftl/base/ftl_base_bdev.o 00:10:05.177 SYMLINK libspdk_ublk.so 00:10:05.177 CC lib/ftl/ftl_trace.o 00:10:05.437 CC 
lib/iscsi/conn.o 00:10:05.437 CC lib/iscsi/init_grp.o 00:10:05.437 CC lib/iscsi/iscsi.o 00:10:05.437 CC lib/iscsi/md5.o 00:10:05.437 CC lib/iscsi/param.o 00:10:05.437 CC lib/iscsi/portal_grp.o 00:10:05.437 CC lib/vhost/vhost.o 00:10:05.437 CC lib/iscsi/tgt_node.o 00:10:05.437 CC lib/iscsi/iscsi_subsystem.o 00:10:05.437 CC lib/vhost/vhost_rpc.o 00:10:05.437 CC lib/vhost/vhost_scsi.o 00:10:05.437 CC lib/iscsi/iscsi_rpc.o 00:10:05.437 CC lib/vhost/vhost_blk.o 00:10:05.437 CC lib/iscsi/task.o 00:10:05.437 CC lib/vhost/rte_vhost_user.o 00:10:05.696 LIB libspdk_ftl.a 00:10:05.954 SO libspdk_ftl.so.9.0 00:10:06.212 SYMLINK libspdk_ftl.so 00:10:06.779 LIB libspdk_vhost.a 00:10:06.779 SO libspdk_vhost.so.8.0 00:10:07.085 SYMLINK libspdk_vhost.so 00:10:07.085 LIB libspdk_nvmf.a 00:10:07.085 LIB libspdk_iscsi.a 00:10:07.085 SO libspdk_nvmf.so.18.1 00:10:07.085 SO libspdk_iscsi.so.8.0 00:10:07.365 SYMLINK libspdk_iscsi.so 00:10:07.365 SYMLINK libspdk_nvmf.so 00:10:07.623 CC module/vfu_device/vfu_virtio.o 00:10:07.623 CC module/vfu_device/vfu_virtio_blk.o 00:10:07.623 CC module/vfu_device/vfu_virtio_scsi.o 00:10:07.623 CC module/env_dpdk/env_dpdk_rpc.o 00:10:07.623 CC module/vfu_device/vfu_virtio_rpc.o 00:10:07.623 CC module/scheduler/gscheduler/gscheduler.o 00:10:07.623 CC module/keyring/file/keyring_rpc.o 00:10:07.623 CC module/keyring/file/keyring.o 00:10:07.623 CC module/scheduler/dynamic/scheduler_dynamic.o 00:10:07.623 CC module/blob/bdev/blob_bdev.o 00:10:07.623 CC module/accel/error/accel_error.o 00:10:07.623 CC module/accel/iaa/accel_iaa.o 00:10:07.623 CC module/sock/posix/posix.o 00:10:07.623 CC module/accel/error/accel_error_rpc.o 00:10:07.623 CC module/accel/dsa/accel_dsa.o 00:10:07.623 CC module/accel/ioat/accel_ioat.o 00:10:07.623 CC module/accel/iaa/accel_iaa_rpc.o 00:10:07.623 CC module/accel/dsa/accel_dsa_rpc.o 00:10:07.623 CC module/accel/ioat/accel_ioat_rpc.o 00:10:07.623 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:10:07.623 CC 
module/keyring/linux/keyring.o 00:10:07.623 CC module/keyring/linux/keyring_rpc.o 00:10:07.881 LIB libspdk_env_dpdk_rpc.a 00:10:07.881 SO libspdk_env_dpdk_rpc.so.6.0 00:10:07.881 LIB libspdk_scheduler_gscheduler.a 00:10:07.881 SO libspdk_scheduler_gscheduler.so.4.0 00:10:07.881 SYMLINK libspdk_env_dpdk_rpc.so 00:10:07.881 LIB libspdk_accel_error.a 00:10:07.881 SYMLINK libspdk_scheduler_gscheduler.so 00:10:07.881 LIB libspdk_keyring_file.a 00:10:07.881 LIB libspdk_accel_ioat.a 00:10:07.881 SO libspdk_accel_error.so.2.0 00:10:07.881 LIB libspdk_accel_iaa.a 00:10:07.881 SO libspdk_keyring_file.so.1.0 00:10:07.881 LIB libspdk_scheduler_dpdk_governor.a 00:10:07.881 SO libspdk_accel_ioat.so.6.0 00:10:07.881 SO libspdk_accel_iaa.so.3.0 00:10:07.881 LIB libspdk_keyring_linux.a 00:10:07.881 SO libspdk_scheduler_dpdk_governor.so.4.0 00:10:08.138 SO libspdk_keyring_linux.so.1.0 00:10:08.138 SYMLINK libspdk_accel_error.so 00:10:08.138 SYMLINK libspdk_keyring_file.so 00:10:08.138 LIB libspdk_scheduler_dynamic.a 00:10:08.138 SYMLINK libspdk_accel_iaa.so 00:10:08.138 SYMLINK libspdk_accel_ioat.so 00:10:08.138 SYMLINK libspdk_scheduler_dpdk_governor.so 00:10:08.138 SO libspdk_scheduler_dynamic.so.4.0 00:10:08.138 SYMLINK libspdk_keyring_linux.so 00:10:08.138 SYMLINK libspdk_scheduler_dynamic.so 00:10:08.138 LIB libspdk_accel_dsa.a 00:10:08.138 LIB libspdk_blob_bdev.a 00:10:08.138 SO libspdk_accel_dsa.so.5.0 00:10:08.138 SO libspdk_blob_bdev.so.11.0 00:10:08.138 SYMLINK libspdk_blob_bdev.so 00:10:08.138 SYMLINK libspdk_accel_dsa.so 00:10:08.403 LIB libspdk_vfu_device.a 00:10:08.403 SO libspdk_vfu_device.so.3.0 00:10:08.403 SYMLINK libspdk_vfu_device.so 00:10:08.403 CC module/blobfs/bdev/blobfs_bdev.o 00:10:08.403 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:10:08.403 CC module/bdev/delay/vbdev_delay.o 00:10:08.403 CC module/bdev/delay/vbdev_delay_rpc.o 00:10:08.403 CC module/bdev/zone_block/vbdev_zone_block.o 00:10:08.403 CC module/bdev/malloc/bdev_malloc.o 00:10:08.403 CC 
module/bdev/ftl/bdev_ftl.o 00:10:08.403 CC module/bdev/malloc/bdev_malloc_rpc.o 00:10:08.403 CC module/bdev/ftl/bdev_ftl_rpc.o 00:10:08.403 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:10:08.403 CC module/bdev/error/vbdev_error.o 00:10:08.403 CC module/bdev/null/bdev_null.o 00:10:08.403 CC module/bdev/nvme/bdev_nvme.o 00:10:08.403 CC module/bdev/null/bdev_null_rpc.o 00:10:08.403 CC module/bdev/error/vbdev_error_rpc.o 00:10:08.403 CC module/bdev/nvme/bdev_nvme_rpc.o 00:10:08.403 CC module/bdev/nvme/nvme_rpc.o 00:10:08.403 CC module/bdev/passthru/vbdev_passthru.o 00:10:08.403 CC module/bdev/nvme/bdev_mdns_client.o 00:10:08.403 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:10:08.403 CC module/bdev/gpt/gpt.o 00:10:08.403 CC module/bdev/nvme/vbdev_opal.o 00:10:08.403 CC module/bdev/gpt/vbdev_gpt.o 00:10:08.403 CC module/bdev/nvme/vbdev_opal_rpc.o 00:10:08.403 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:10:08.403 CC module/bdev/split/vbdev_split.o 00:10:08.403 CC module/bdev/virtio/bdev_virtio_scsi.o 00:10:08.403 CC module/bdev/lvol/vbdev_lvol.o 00:10:08.403 CC module/bdev/raid/bdev_raid.o 00:10:08.403 CC module/bdev/aio/bdev_aio.o 00:10:08.403 CC module/bdev/iscsi/bdev_iscsi.o 00:10:08.403 LIB libspdk_sock_posix.a 00:10:08.661 SO libspdk_sock_posix.so.6.0 00:10:08.661 SYMLINK libspdk_sock_posix.so 00:10:08.661 CC module/bdev/aio/bdev_aio_rpc.o 00:10:08.919 CC module/bdev/split/vbdev_split_rpc.o 00:10:08.919 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:10:08.919 CC module/bdev/raid/bdev_raid_rpc.o 00:10:08.919 LIB libspdk_blobfs_bdev.a 00:10:08.919 CC module/bdev/virtio/bdev_virtio_blk.o 00:10:08.919 CC module/bdev/virtio/bdev_virtio_rpc.o 00:10:08.919 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:10:08.919 CC module/bdev/raid/bdev_raid_sb.o 00:10:08.919 CC module/bdev/raid/raid0.o 00:10:08.919 SO libspdk_blobfs_bdev.so.6.0 00:10:08.919 CC module/bdev/raid/raid1.o 00:10:08.919 CC module/bdev/raid/concat.o 00:10:08.919 SYMLINK libspdk_blobfs_bdev.so 00:10:08.919 LIB 
libspdk_bdev_error.a 00:10:08.919 LIB libspdk_bdev_gpt.a 00:10:09.176 LIB libspdk_bdev_ftl.a 00:10:09.176 SO libspdk_bdev_error.so.6.0 00:10:09.176 LIB libspdk_bdev_null.a 00:10:09.176 SO libspdk_bdev_gpt.so.6.0 00:10:09.176 SO libspdk_bdev_ftl.so.6.0 00:10:09.176 SO libspdk_bdev_null.so.6.0 00:10:09.176 LIB libspdk_bdev_passthru.a 00:10:09.176 LIB libspdk_bdev_split.a 00:10:09.176 LIB libspdk_bdev_aio.a 00:10:09.176 SO libspdk_bdev_passthru.so.6.0 00:10:09.176 SYMLINK libspdk_bdev_error.so 00:10:09.176 SO libspdk_bdev_split.so.6.0 00:10:09.176 LIB libspdk_bdev_delay.a 00:10:09.176 SO libspdk_bdev_aio.so.6.0 00:10:09.176 SYMLINK libspdk_bdev_ftl.so 00:10:09.176 SYMLINK libspdk_bdev_gpt.so 00:10:09.176 SO libspdk_bdev_delay.so.6.0 00:10:09.176 LIB libspdk_bdev_malloc.a 00:10:09.176 LIB libspdk_bdev_iscsi.a 00:10:09.176 SYMLINK libspdk_bdev_null.so 00:10:09.176 SYMLINK libspdk_bdev_passthru.so 00:10:09.176 LIB libspdk_bdev_zone_block.a 00:10:09.176 SYMLINK libspdk_bdev_split.so 00:10:09.176 SO libspdk_bdev_malloc.so.6.0 00:10:09.176 SYMLINK libspdk_bdev_aio.so 00:10:09.176 SO libspdk_bdev_iscsi.so.6.0 00:10:09.176 SO libspdk_bdev_zone_block.so.6.0 00:10:09.176 SYMLINK libspdk_bdev_delay.so 00:10:09.176 SYMLINK libspdk_bdev_malloc.so 00:10:09.176 SYMLINK libspdk_bdev_iscsi.so 00:10:09.433 SYMLINK libspdk_bdev_zone_block.so 00:10:09.433 LIB libspdk_bdev_virtio.a 00:10:09.433 SO libspdk_bdev_virtio.so.6.0 00:10:09.433 LIB libspdk_bdev_lvol.a 00:10:09.433 SYMLINK libspdk_bdev_virtio.so 00:10:09.433 SO libspdk_bdev_lvol.so.6.0 00:10:09.433 SYMLINK libspdk_bdev_lvol.so 00:10:09.690 LIB libspdk_bdev_raid.a 00:10:09.948 SO libspdk_bdev_raid.so.6.0 00:10:09.948 SYMLINK libspdk_bdev_raid.so 00:10:11.318 LIB libspdk_bdev_nvme.a 00:10:11.318 SO libspdk_bdev_nvme.so.7.0 00:10:11.318 SYMLINK libspdk_bdev_nvme.so 00:10:11.884 CC module/event/subsystems/vmd/vmd.o 00:10:11.884 CC module/event/subsystems/keyring/keyring.o 00:10:11.884 CC module/event/subsystems/vhost_blk/vhost_blk.o 
00:10:11.884 CC module/event/subsystems/sock/sock.o 00:10:11.884 CC module/event/subsystems/scheduler/scheduler.o 00:10:11.884 CC module/event/subsystems/vmd/vmd_rpc.o 00:10:11.884 CC module/event/subsystems/iobuf/iobuf.o 00:10:11.884 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:10:11.884 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:10:11.884 LIB libspdk_event_keyring.a 00:10:11.884 LIB libspdk_event_vhost_blk.a 00:10:11.884 LIB libspdk_event_scheduler.a 00:10:11.884 LIB libspdk_event_vfu_tgt.a 00:10:11.884 LIB libspdk_event_vmd.a 00:10:11.884 LIB libspdk_event_sock.a 00:10:11.884 SO libspdk_event_keyring.so.1.0 00:10:11.884 LIB libspdk_event_iobuf.a 00:10:11.884 SO libspdk_event_vhost_blk.so.3.0 00:10:11.884 SO libspdk_event_vfu_tgt.so.3.0 00:10:11.884 SO libspdk_event_sock.so.5.0 00:10:11.884 SO libspdk_event_scheduler.so.4.0 00:10:11.884 SO libspdk_event_vmd.so.6.0 00:10:11.884 SO libspdk_event_iobuf.so.3.0 00:10:11.884 SYMLINK libspdk_event_keyring.so 00:10:11.884 SYMLINK libspdk_event_vhost_blk.so 00:10:12.142 SYMLINK libspdk_event_sock.so 00:10:12.142 SYMLINK libspdk_event_vfu_tgt.so 00:10:12.142 SYMLINK libspdk_event_scheduler.so 00:10:12.142 SYMLINK libspdk_event_vmd.so 00:10:12.142 SYMLINK libspdk_event_iobuf.so 00:10:12.142 CC module/event/subsystems/accel/accel.o 00:10:12.401 LIB libspdk_event_accel.a 00:10:12.401 SO libspdk_event_accel.so.6.0 00:10:12.401 SYMLINK libspdk_event_accel.so 00:10:12.658 CC module/event/subsystems/bdev/bdev.o 00:10:12.916 LIB libspdk_event_bdev.a 00:10:12.916 SO libspdk_event_bdev.so.6.0 00:10:12.916 SYMLINK libspdk_event_bdev.so 00:10:13.174 CC module/event/subsystems/ublk/ublk.o 00:10:13.174 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:10:13.174 CC module/event/subsystems/nbd/nbd.o 00:10:13.174 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:10:13.174 CC module/event/subsystems/scsi/scsi.o 00:10:13.174 LIB libspdk_event_nbd.a 00:10:13.174 LIB libspdk_event_ublk.a 00:10:13.434 LIB libspdk_event_scsi.a 00:10:13.434 SO 
libspdk_event_nbd.so.6.0 00:10:13.434 SO libspdk_event_ublk.so.3.0 00:10:13.434 SO libspdk_event_scsi.so.6.0 00:10:13.434 SYMLINK libspdk_event_nbd.so 00:10:13.434 SYMLINK libspdk_event_ublk.so 00:10:13.434 SYMLINK libspdk_event_scsi.so 00:10:13.434 LIB libspdk_event_nvmf.a 00:10:13.434 SO libspdk_event_nvmf.so.6.0 00:10:13.434 SYMLINK libspdk_event_nvmf.so 00:10:13.692 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:10:13.692 CC module/event/subsystems/iscsi/iscsi.o 00:10:13.692 LIB libspdk_event_vhost_scsi.a 00:10:13.692 SO libspdk_event_vhost_scsi.so.3.0 00:10:13.692 LIB libspdk_event_iscsi.a 00:10:13.692 SO libspdk_event_iscsi.so.6.0 00:10:13.949 SYMLINK libspdk_event_vhost_scsi.so 00:10:13.949 SYMLINK libspdk_event_iscsi.so 00:10:13.949 SO libspdk.so.6.0 00:10:13.949 SYMLINK libspdk.so 00:10:14.211 CC app/trace_record/trace_record.o 00:10:14.211 CC test/rpc_client/rpc_client_test.o 00:10:14.211 CXX app/trace/trace.o 00:10:14.211 CC app/spdk_nvme_perf/perf.o 00:10:14.211 CC app/spdk_top/spdk_top.o 00:10:14.211 CC app/spdk_lspci/spdk_lspci.o 00:10:14.211 CC app/spdk_nvme_identify/identify.o 00:10:14.211 CC app/spdk_nvme_discover/discovery_aer.o 00:10:14.211 TEST_HEADER include/spdk/accel.h 00:10:14.211 TEST_HEADER include/spdk/accel_module.h 00:10:14.211 TEST_HEADER include/spdk/assert.h 00:10:14.211 TEST_HEADER include/spdk/barrier.h 00:10:14.211 TEST_HEADER include/spdk/base64.h 00:10:14.211 TEST_HEADER include/spdk/bdev.h 00:10:14.211 TEST_HEADER include/spdk/bdev_module.h 00:10:14.211 TEST_HEADER include/spdk/bdev_zone.h 00:10:14.211 TEST_HEADER include/spdk/bit_array.h 00:10:14.211 TEST_HEADER include/spdk/bit_pool.h 00:10:14.211 TEST_HEADER include/spdk/blob_bdev.h 00:10:14.211 TEST_HEADER include/spdk/blobfs_bdev.h 00:10:14.211 TEST_HEADER include/spdk/blobfs.h 00:10:14.211 TEST_HEADER include/spdk/blob.h 00:10:14.211 TEST_HEADER include/spdk/conf.h 00:10:14.211 TEST_HEADER include/spdk/config.h 00:10:14.211 TEST_HEADER include/spdk/cpuset.h 
00:10:14.211 TEST_HEADER include/spdk/crc16.h 00:10:14.211 TEST_HEADER include/spdk/crc32.h 00:10:14.211 TEST_HEADER include/spdk/crc64.h 00:10:14.211 TEST_HEADER include/spdk/dif.h 00:10:14.211 TEST_HEADER include/spdk/dma.h 00:10:14.211 TEST_HEADER include/spdk/endian.h 00:10:14.211 TEST_HEADER include/spdk/env_dpdk.h 00:10:14.211 TEST_HEADER include/spdk/env.h 00:10:14.211 TEST_HEADER include/spdk/fd_group.h 00:10:14.211 TEST_HEADER include/spdk/event.h 00:10:14.211 TEST_HEADER include/spdk/fd.h 00:10:14.211 TEST_HEADER include/spdk/file.h 00:10:14.211 TEST_HEADER include/spdk/ftl.h 00:10:14.211 CC examples/interrupt_tgt/interrupt_tgt.o 00:10:14.211 TEST_HEADER include/spdk/gpt_spec.h 00:10:14.211 CC app/spdk_dd/spdk_dd.o 00:10:14.211 TEST_HEADER include/spdk/hexlify.h 00:10:14.211 TEST_HEADER include/spdk/histogram_data.h 00:10:14.211 TEST_HEADER include/spdk/idxd.h 00:10:14.211 TEST_HEADER include/spdk/idxd_spec.h 00:10:14.211 TEST_HEADER include/spdk/init.h 00:10:14.211 TEST_HEADER include/spdk/ioat.h 00:10:14.211 TEST_HEADER include/spdk/ioat_spec.h 00:10:14.211 CC app/nvmf_tgt/nvmf_main.o 00:10:14.211 TEST_HEADER include/spdk/iscsi_spec.h 00:10:14.211 TEST_HEADER include/spdk/json.h 00:10:14.211 TEST_HEADER include/spdk/jsonrpc.h 00:10:14.211 TEST_HEADER include/spdk/keyring.h 00:10:14.211 TEST_HEADER include/spdk/keyring_module.h 00:10:14.211 CC app/iscsi_tgt/iscsi_tgt.o 00:10:14.211 TEST_HEADER include/spdk/likely.h 00:10:14.211 TEST_HEADER include/spdk/log.h 00:10:14.211 TEST_HEADER include/spdk/lvol.h 00:10:14.211 TEST_HEADER include/spdk/memory.h 00:10:14.211 TEST_HEADER include/spdk/mmio.h 00:10:14.211 TEST_HEADER include/spdk/nbd.h 00:10:14.211 TEST_HEADER include/spdk/notify.h 00:10:14.211 TEST_HEADER include/spdk/nvme.h 00:10:14.211 TEST_HEADER include/spdk/nvme_intel.h 00:10:14.211 TEST_HEADER include/spdk/nvme_ocssd.h 00:10:14.481 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:10:14.481 CC test/app/jsoncat/jsoncat.o 00:10:14.481 TEST_HEADER 
include/spdk/nvme_spec.h 00:10:14.481 TEST_HEADER include/spdk/nvme_zns.h 00:10:14.481 TEST_HEADER include/spdk/nvmf_cmd.h 00:10:14.481 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:10:14.481 CC test/app/histogram_perf/histogram_perf.o 00:10:14.481 TEST_HEADER include/spdk/nvmf.h 00:10:14.481 CC test/app/stub/stub.o 00:10:14.481 CC test/thread/poller_perf/poller_perf.o 00:10:14.481 CC examples/util/zipf/zipf.o 00:10:14.481 CC examples/ioat/verify/verify.o 00:10:14.481 TEST_HEADER include/spdk/nvmf_spec.h 00:10:14.481 CC examples/ioat/perf/perf.o 00:10:14.481 TEST_HEADER include/spdk/nvmf_transport.h 00:10:14.481 TEST_HEADER include/spdk/opal.h 00:10:14.481 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:10:14.481 TEST_HEADER include/spdk/opal_spec.h 00:10:14.481 TEST_HEADER include/spdk/pci_ids.h 00:10:14.481 CC app/fio/nvme/fio_plugin.o 00:10:14.481 CC test/env/pci/pci_ut.o 00:10:14.481 TEST_HEADER include/spdk/pipe.h 00:10:14.481 CC app/spdk_tgt/spdk_tgt.o 00:10:14.481 CC test/env/vtophys/vtophys.o 00:10:14.481 TEST_HEADER include/spdk/queue.h 00:10:14.481 TEST_HEADER include/spdk/reduce.h 00:10:14.481 TEST_HEADER include/spdk/rpc.h 00:10:14.481 TEST_HEADER include/spdk/scheduler.h 00:10:14.481 CC test/env/memory/memory_ut.o 00:10:14.481 TEST_HEADER include/spdk/scsi.h 00:10:14.481 TEST_HEADER include/spdk/scsi_spec.h 00:10:14.481 TEST_HEADER include/spdk/sock.h 00:10:14.481 TEST_HEADER include/spdk/stdinc.h 00:10:14.481 TEST_HEADER include/spdk/string.h 00:10:14.481 TEST_HEADER include/spdk/thread.h 00:10:14.481 TEST_HEADER include/spdk/trace.h 00:10:14.481 TEST_HEADER include/spdk/trace_parser.h 00:10:14.481 TEST_HEADER include/spdk/tree.h 00:10:14.481 TEST_HEADER include/spdk/ublk.h 00:10:14.481 TEST_HEADER include/spdk/util.h 00:10:14.481 CC test/dma/test_dma/test_dma.o 00:10:14.481 CC app/fio/bdev/fio_plugin.o 00:10:14.481 TEST_HEADER include/spdk/uuid.h 00:10:14.481 CC test/app/bdev_svc/bdev_svc.o 00:10:14.481 TEST_HEADER include/spdk/version.h 
00:10:14.481 TEST_HEADER include/spdk/vfio_user_pci.h 00:10:14.481 TEST_HEADER include/spdk/vfio_user_spec.h 00:10:14.481 TEST_HEADER include/spdk/vhost.h 00:10:14.481 TEST_HEADER include/spdk/vmd.h 00:10:14.481 TEST_HEADER include/spdk/xor.h 00:10:14.481 TEST_HEADER include/spdk/zipf.h 00:10:14.481 CXX test/cpp_headers/accel.o 00:10:14.481 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:10:14.481 LINK spdk_lspci 00:10:14.481 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:10:14.481 CC test/env/mem_callbacks/mem_callbacks.o 00:10:14.744 LINK rpc_client_test 00:10:14.744 LINK histogram_perf 00:10:14.744 LINK poller_perf 00:10:14.744 LINK jsoncat 00:10:14.744 LINK spdk_nvme_discover 00:10:14.744 LINK zipf 00:10:14.744 LINK interrupt_tgt 00:10:14.744 LINK nvmf_tgt 00:10:14.744 LINK env_dpdk_post_init 00:10:14.744 LINK spdk_trace_record 00:10:14.744 LINK vtophys 00:10:14.744 LINK iscsi_tgt 00:10:14.744 LINK stub 00:10:14.744 LINK ioat_perf 00:10:14.744 LINK spdk_tgt 00:10:14.744 LINK bdev_svc 00:10:14.744 LINK verify 00:10:15.007 CXX test/cpp_headers/accel_module.o 00:10:15.007 CXX test/cpp_headers/assert.o 00:10:15.007 CXX test/cpp_headers/barrier.o 00:10:15.007 CXX test/cpp_headers/base64.o 00:10:15.007 CXX test/cpp_headers/bdev.o 00:10:15.007 CXX test/cpp_headers/bdev_module.o 00:10:15.007 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:10:15.007 LINK mem_callbacks 00:10:15.007 CXX test/cpp_headers/bdev_zone.o 00:10:15.007 LINK spdk_dd 00:10:15.007 CXX test/cpp_headers/bit_array.o 00:10:15.007 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:10:15.007 CXX test/cpp_headers/bit_pool.o 00:10:15.007 LINK spdk_trace 00:10:15.007 CXX test/cpp_headers/blob_bdev.o 00:10:15.007 CXX test/cpp_headers/blobfs_bdev.o 00:10:15.268 LINK pci_ut 00:10:15.268 CXX test/cpp_headers/blobfs.o 00:10:15.268 LINK test_dma 00:10:15.268 CC test/event/event_perf/event_perf.o 00:10:15.268 LINK nvme_fuzz 00:10:15.268 CXX test/cpp_headers/blob.o 00:10:15.268 CXX test/cpp_headers/conf.o 00:10:15.268 CXX 
test/cpp_headers/config.o 00:10:15.268 CC test/event/reactor/reactor.o 00:10:15.268 CXX test/cpp_headers/cpuset.o 00:10:15.268 CC examples/sock/hello_world/hello_sock.o 00:10:15.530 CXX test/cpp_headers/crc16.o 00:10:15.530 CXX test/cpp_headers/crc32.o 00:10:15.530 CXX test/cpp_headers/crc64.o 00:10:15.530 CC examples/vmd/lsvmd/lsvmd.o 00:10:15.530 CC test/event/reactor_perf/reactor_perf.o 00:10:15.530 CC examples/vmd/led/led.o 00:10:15.530 CC examples/thread/thread/thread_ex.o 00:10:15.530 CXX test/cpp_headers/dif.o 00:10:15.530 CXX test/cpp_headers/dma.o 00:10:15.530 LINK spdk_bdev 00:10:15.530 LINK spdk_nvme 00:10:15.530 CXX test/cpp_headers/endian.o 00:10:15.530 CC test/event/app_repeat/app_repeat.o 00:10:15.530 CC examples/idxd/perf/perf.o 00:10:15.530 CXX test/cpp_headers/env_dpdk.o 00:10:15.530 LINK event_perf 00:10:15.530 CC test/event/scheduler/scheduler.o 00:10:15.789 CXX test/cpp_headers/env.o 00:10:15.789 LINK reactor 00:10:15.789 LINK lsvmd 00:10:15.789 CXX test/cpp_headers/event.o 00:10:15.789 CXX test/cpp_headers/fd_group.o 00:10:15.789 LINK spdk_nvme_perf 00:10:15.789 LINK led 00:10:15.789 LINK reactor_perf 00:10:15.789 LINK vhost_fuzz 00:10:15.789 LINK spdk_nvme_identify 00:10:15.789 CXX test/cpp_headers/fd.o 00:10:15.789 CXX test/cpp_headers/file.o 00:10:15.789 CXX test/cpp_headers/ftl.o 00:10:15.789 CXX test/cpp_headers/gpt_spec.o 00:10:15.789 CXX test/cpp_headers/hexlify.o 00:10:15.789 CC app/vhost/vhost.o 00:10:15.789 CXX test/cpp_headers/histogram_data.o 00:10:15.789 LINK memory_ut 00:10:15.789 LINK hello_sock 00:10:15.789 LINK app_repeat 00:10:16.049 CC test/accel/dif/dif.o 00:10:16.049 CC test/blobfs/mkfs/mkfs.o 00:10:16.049 CC test/nvme/aer/aer.o 00:10:16.049 CXX test/cpp_headers/idxd.o 00:10:16.049 LINK spdk_top 00:10:16.049 LINK thread 00:10:16.049 CC test/nvme/reset/reset.o 00:10:16.049 CXX test/cpp_headers/idxd_spec.o 00:10:16.049 CXX test/cpp_headers/init.o 00:10:16.049 CXX test/cpp_headers/ioat.o 00:10:16.049 CXX 
test/cpp_headers/ioat_spec.o 00:10:16.049 CC test/nvme/sgl/sgl.o 00:10:16.049 CXX test/cpp_headers/iscsi_spec.o 00:10:16.049 CC test/lvol/esnap/esnap.o 00:10:16.049 LINK scheduler 00:10:16.049 CC test/nvme/e2edp/nvme_dp.o 00:10:16.049 CC test/nvme/overhead/overhead.o 00:10:16.309 CC test/nvme/err_injection/err_injection.o 00:10:16.309 CXX test/cpp_headers/json.o 00:10:16.309 CXX test/cpp_headers/jsonrpc.o 00:10:16.309 CC test/nvme/startup/startup.o 00:10:16.309 LINK vhost 00:10:16.309 CXX test/cpp_headers/keyring.o 00:10:16.309 LINK idxd_perf 00:10:16.309 CXX test/cpp_headers/keyring_module.o 00:10:16.309 CC test/nvme/simple_copy/simple_copy.o 00:10:16.309 CC test/nvme/reserve/reserve.o 00:10:16.309 CC test/nvme/connect_stress/connect_stress.o 00:10:16.309 CC test/nvme/boot_partition/boot_partition.o 00:10:16.309 CC test/nvme/compliance/nvme_compliance.o 00:10:16.309 CC test/nvme/doorbell_aers/doorbell_aers.o 00:10:16.309 CC test/nvme/fused_ordering/fused_ordering.o 00:10:16.309 CXX test/cpp_headers/likely.o 00:10:16.309 LINK mkfs 00:10:16.309 CXX test/cpp_headers/log.o 00:10:16.573 CXX test/cpp_headers/lvol.o 00:10:16.573 LINK aer 00:10:16.573 CXX test/cpp_headers/memory.o 00:10:16.573 LINK startup 00:10:16.573 CC test/nvme/fdp/fdp.o 00:10:16.573 CC test/nvme/cuse/cuse.o 00:10:16.573 CXX test/cpp_headers/mmio.o 00:10:16.573 LINK reset 00:10:16.573 LINK err_injection 00:10:16.573 LINK sgl 00:10:16.573 CXX test/cpp_headers/nbd.o 00:10:16.573 LINK connect_stress 00:10:16.573 LINK nvme_dp 00:10:16.837 CC examples/nvme/reconnect/reconnect.o 00:10:16.837 CC examples/nvme/hello_world/hello_world.o 00:10:16.837 LINK boot_partition 00:10:16.837 CC examples/accel/perf/accel_perf.o 00:10:16.837 LINK reserve 00:10:16.837 CXX test/cpp_headers/notify.o 00:10:16.837 LINK overhead 00:10:16.837 CC examples/nvme/nvme_manage/nvme_manage.o 00:10:16.837 LINK dif 00:10:16.837 LINK fused_ordering 00:10:16.837 LINK doorbell_aers 00:10:16.837 CC examples/nvme/arbitration/arbitration.o 
00:10:16.837 LINK simple_copy 00:10:16.837 CC examples/nvme/hotplug/hotplug.o 00:10:16.837 CC examples/blob/hello_world/hello_blob.o 00:10:16.837 CXX test/cpp_headers/nvme.o 00:10:16.837 CXX test/cpp_headers/nvme_intel.o 00:10:16.837 CXX test/cpp_headers/nvme_ocssd.o 00:10:16.837 CXX test/cpp_headers/nvme_ocssd_spec.o 00:10:16.837 CXX test/cpp_headers/nvme_spec.o 00:10:16.837 CXX test/cpp_headers/nvme_zns.o 00:10:17.098 CC examples/blob/cli/blobcli.o 00:10:17.098 CC examples/nvme/cmb_copy/cmb_copy.o 00:10:17.098 LINK nvme_compliance 00:10:17.098 CC examples/nvme/abort/abort.o 00:10:17.098 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:10:17.098 CXX test/cpp_headers/nvmf_cmd.o 00:10:17.098 CXX test/cpp_headers/nvmf_fc_spec.o 00:10:17.098 CXX test/cpp_headers/nvmf.o 00:10:17.098 CXX test/cpp_headers/nvmf_spec.o 00:10:17.098 CXX test/cpp_headers/nvmf_transport.o 00:10:17.098 CXX test/cpp_headers/opal.o 00:10:17.098 CXX test/cpp_headers/opal_spec.o 00:10:17.098 CXX test/cpp_headers/pci_ids.o 00:10:17.098 CXX test/cpp_headers/pipe.o 00:10:17.098 LINK hello_world 00:10:17.361 LINK fdp 00:10:17.361 CXX test/cpp_headers/queue.o 00:10:17.361 CXX test/cpp_headers/reduce.o 00:10:17.361 CXX test/cpp_headers/rpc.o 00:10:17.361 CXX test/cpp_headers/scheduler.o 00:10:17.361 CXX test/cpp_headers/scsi.o 00:10:17.361 LINK cmb_copy 00:10:17.361 CXX test/cpp_headers/scsi_spec.o 00:10:17.361 LINK pmr_persistence 00:10:17.361 LINK hotplug 00:10:17.361 CXX test/cpp_headers/sock.o 00:10:17.361 CXX test/cpp_headers/stdinc.o 00:10:17.361 LINK reconnect 00:10:17.361 LINK hello_blob 00:10:17.361 LINK arbitration 00:10:17.361 CXX test/cpp_headers/string.o 00:10:17.361 CXX test/cpp_headers/thread.o 00:10:17.361 CXX test/cpp_headers/trace.o 00:10:17.361 CXX test/cpp_headers/trace_parser.o 00:10:17.623 CXX test/cpp_headers/tree.o 00:10:17.623 CXX test/cpp_headers/ublk.o 00:10:17.623 CXX test/cpp_headers/util.o 00:10:17.623 CXX test/cpp_headers/uuid.o 00:10:17.623 CXX 
test/cpp_headers/version.o 00:10:17.623 CXX test/cpp_headers/vfio_user_pci.o 00:10:17.623 CXX test/cpp_headers/vfio_user_spec.o 00:10:17.623 CXX test/cpp_headers/vhost.o 00:10:17.623 CXX test/cpp_headers/vmd.o 00:10:17.623 CC test/bdev/bdevio/bdevio.o 00:10:17.623 CXX test/cpp_headers/xor.o 00:10:17.623 CXX test/cpp_headers/zipf.o 00:10:17.623 LINK accel_perf 00:10:17.881 LINK abort 00:10:17.881 LINK nvme_manage 00:10:17.881 LINK blobcli 00:10:17.881 LINK iscsi_fuzz 00:10:18.139 CC examples/bdev/hello_world/hello_bdev.o 00:10:18.139 CC examples/bdev/bdevperf/bdevperf.o 00:10:18.139 LINK bdevio 00:10:18.397 LINK hello_bdev 00:10:18.397 LINK cuse 00:10:18.964 LINK bdevperf 00:10:19.531 CC examples/nvmf/nvmf/nvmf.o 00:10:19.788 LINK nvmf 00:10:21.690 LINK esnap 00:10:21.949 00:10:21.949 real 0m46.540s 00:10:21.949 user 8m17.939s 00:10:21.949 sys 1m44.683s 00:10:21.949 02:16:12 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:10:21.949 02:16:12 make -- common/autotest_common.sh@10 -- $ set +x 00:10:21.949 ************************************ 00:10:21.949 END TEST make 00:10:21.949 ************************************ 00:10:21.949 02:16:12 -- common/autotest_common.sh@1142 -- $ return 0 00:10:21.949 02:16:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:10:21.949 02:16:12 -- pm/common@29 -- $ signal_monitor_resources TERM 00:10:21.949 02:16:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:10:21.949 02:16:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:21.949 02:16:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:10:21.949 02:16:12 -- pm/common@44 -- $ pid=1646442 00:10:21.949 02:16:12 -- pm/common@50 -- $ kill -TERM 1646442 00:10:21.949 02:16:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:21.949 02:16:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 
00:10:21.949 02:16:12 -- pm/common@44 -- $ pid=1646444 00:10:21.949 02:16:12 -- pm/common@50 -- $ kill -TERM 1646444 00:10:21.949 02:16:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:21.949 02:16:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:10:21.949 02:16:12 -- pm/common@44 -- $ pid=1646446 00:10:21.949 02:16:12 -- pm/common@50 -- $ kill -TERM 1646446 00:10:21.949 02:16:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:21.949 02:16:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:10:21.949 02:16:12 -- pm/common@44 -- $ pid=1646476 00:10:21.949 02:16:12 -- pm/common@50 -- $ sudo -E kill -TERM 1646476 00:10:22.207 02:16:12 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:22.207 02:16:12 -- nvmf/common.sh@7 -- # uname -s 00:10:22.207 02:16:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.207 02:16:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.207 02:16:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.207 02:16:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.207 02:16:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.207 02:16:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.207 02:16:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.207 02:16:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.207 02:16:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.207 02:16:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.207 02:16:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:22.207 02:16:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:10:22.207 02:16:12 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.207 02:16:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.207 02:16:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:22.207 02:16:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.207 02:16:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:22.207 02:16:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.207 02:16:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.207 02:16:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.207 02:16:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.207 02:16:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.207 02:16:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.207 02:16:12 -- paths/export.sh@5 -- # export PATH 00:10:22.207 02:16:12 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.207 02:16:12 -- nvmf/common.sh@47 -- # : 0 00:10:22.207 02:16:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:22.207 02:16:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:22.207 02:16:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.207 02:16:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.207 02:16:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.207 02:16:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:22.207 02:16:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:22.207 02:16:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:22.207 02:16:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:10:22.207 02:16:12 -- spdk/autotest.sh@32 -- # uname -s 00:10:22.207 02:16:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:10:22.207 02:16:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:10:22.207 02:16:12 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:10:22.207 02:16:12 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:10:22.207 02:16:12 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:10:22.207 02:16:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:10:22.207 02:16:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:10:22.207 02:16:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:10:22.207 02:16:12 -- spdk/autotest.sh@48 -- # udevadm_pid=1720360 00:10:22.207 02:16:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:10:22.207 02:16:12 -- 
spdk/autotest.sh@53 -- # start_monitor_resources 00:10:22.207 02:16:12 -- pm/common@17 -- # local monitor 00:10:22.207 02:16:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:22.207 02:16:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:22.207 02:16:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:22.207 02:16:12 -- pm/common@21 -- # date +%s 00:10:22.207 02:16:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:22.207 02:16:12 -- pm/common@21 -- # date +%s 00:10:22.207 02:16:12 -- pm/common@25 -- # sleep 1 00:10:22.207 02:16:12 -- pm/common@21 -- # date +%s 00:10:22.207 02:16:12 -- pm/common@21 -- # date +%s 00:10:22.207 02:16:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720656972 00:10:22.207 02:16:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720656972 00:10:22.208 02:16:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720656972 00:10:22.208 02:16:12 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720656972 00:10:22.208 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720656972_collect-vmstat.pm.log 00:10:22.208 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720656972_collect-cpu-load.pm.log 00:10:22.208 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720656972_collect-cpu-temp.pm.log 00:10:22.208 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720656972_collect-bmc-pm.bmc.pm.log 00:10:23.143 02:16:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:10:23.143 02:16:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:10:23.143 02:16:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:23.143 02:16:13 -- common/autotest_common.sh@10 -- # set +x 00:10:23.143 02:16:13 -- spdk/autotest.sh@59 -- # create_test_list 00:10:23.143 02:16:13 -- common/autotest_common.sh@746 -- # xtrace_disable 00:10:23.143 02:16:13 -- common/autotest_common.sh@10 -- # set +x 00:10:23.143 02:16:13 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:10:23.143 02:16:13 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:23.143 02:16:13 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:23.143 02:16:13 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:10:23.143 02:16:13 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:23.143 02:16:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:10:23.143 02:16:13 -- common/autotest_common.sh@1455 -- # uname 00:10:23.143 02:16:13 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:10:23.144 02:16:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:10:23.144 02:16:13 -- common/autotest_common.sh@1475 -- # uname 00:10:23.144 02:16:13 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:10:23.144 02:16:13 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:10:23.144 02:16:13 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:10:23.144 02:16:13 -- spdk/autotest.sh@72 -- # 
hash lcov 00:10:23.144 02:16:13 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:10:23.144 02:16:13 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:10:23.144 --rc lcov_branch_coverage=1 00:10:23.144 --rc lcov_function_coverage=1 00:10:23.144 --rc genhtml_branch_coverage=1 00:10:23.144 --rc genhtml_function_coverage=1 00:10:23.144 --rc genhtml_legend=1 00:10:23.144 --rc geninfo_all_blocks=1 00:10:23.144 ' 00:10:23.144 02:16:13 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:10:23.144 --rc lcov_branch_coverage=1 00:10:23.144 --rc lcov_function_coverage=1 00:10:23.144 --rc genhtml_branch_coverage=1 00:10:23.144 --rc genhtml_function_coverage=1 00:10:23.144 --rc genhtml_legend=1 00:10:23.144 --rc geninfo_all_blocks=1 00:10:23.144 ' 00:10:23.144 02:16:13 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:10:23.144 --rc lcov_branch_coverage=1 00:10:23.144 --rc lcov_function_coverage=1 00:10:23.144 --rc genhtml_branch_coverage=1 00:10:23.144 --rc genhtml_function_coverage=1 00:10:23.144 --rc genhtml_legend=1 00:10:23.144 --rc geninfo_all_blocks=1 00:10:23.144 --no-external' 00:10:23.144 02:16:13 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:10:23.144 --rc lcov_branch_coverage=1 00:10:23.144 --rc lcov_function_coverage=1 00:10:23.144 --rc genhtml_branch_coverage=1 00:10:23.144 --rc genhtml_function_coverage=1 00:10:23.144 --rc genhtml_legend=1 00:10:23.144 --rc geninfo_all_blocks=1 00:10:23.144 --no-external' 00:10:23.144 02:16:13 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:10:23.401 lcov: LCOV version 1.14 00:10:23.401 02:16:13 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:10:38.292 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:10:38.292 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:10:53.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:10:53.156 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:10:53.156 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:10:53.156 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:10:53.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:10:53.156 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:10:53.157 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:10:53.157 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:10:53.157 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:10:53.157 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:10:53.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:10:53.157 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:10:57.344 02:16:47 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:10:57.344 02:16:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:57.344 02:16:47 -- common/autotest_common.sh@10 -- # set +x 00:10:57.344 02:16:47 -- spdk/autotest.sh@91 -- # rm -f 00:10:57.344 02:16:47 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:10:58.282 0000:84:00.0 (8086 0a54): Already using the nvme driver 00:10:58.282 0000:00:04.7 (8086 3c27): Already using the ioatdma driver 00:10:58.282 0000:00:04.6 (8086 3c26): Already using the ioatdma driver 00:10:58.282 0000:00:04.5 (8086 3c25): Already using the ioatdma driver 00:10:58.541 0000:00:04.4 (8086 3c24): Already using the 
ioatdma driver 00:10:58.541 0000:00:04.3 (8086 3c23): Already using the ioatdma driver 00:10:58.541 0000:00:04.2 (8086 3c22): Already using the ioatdma driver 00:10:58.541 0000:00:04.1 (8086 3c21): Already using the ioatdma driver 00:10:58.541 0000:00:04.0 (8086 3c20): Already using the ioatdma driver 00:10:58.541 0000:80:04.7 (8086 3c27): Already using the ioatdma driver 00:10:58.541 0000:80:04.6 (8086 3c26): Already using the ioatdma driver 00:10:58.541 0000:80:04.5 (8086 3c25): Already using the ioatdma driver 00:10:58.541 0000:80:04.4 (8086 3c24): Already using the ioatdma driver 00:10:58.541 0000:80:04.3 (8086 3c23): Already using the ioatdma driver 00:10:58.541 0000:80:04.2 (8086 3c22): Already using the ioatdma driver 00:10:58.541 0000:80:04.1 (8086 3c21): Already using the ioatdma driver 00:10:58.541 0000:80:04.0 (8086 3c20): Already using the ioatdma driver 00:10:58.541 02:16:48 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:10:58.541 02:16:48 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:10:58.541 02:16:48 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:10:58.541 02:16:48 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:10:58.541 02:16:48 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:10:58.541 02:16:48 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:10:58.541 02:16:48 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:10:58.541 02:16:48 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:58.541 02:16:48 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:10:58.541 02:16:48 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:10:58.541 02:16:48 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:10:58.541 02:16:48 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:10:58.541 02:16:48 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:10:58.541 02:16:48 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:10:58.541 02:16:48 
-- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:10:58.800 No valid GPT data, bailing 00:10:58.800 02:16:48 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:10:58.800 02:16:48 -- scripts/common.sh@391 -- # pt= 00:10:58.800 02:16:48 -- scripts/common.sh@392 -- # return 1 00:10:58.800 02:16:48 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:10:58.800 1+0 records in 00:10:58.800 1+0 records out 00:10:58.800 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00230164 s, 456 MB/s 00:10:58.800 02:16:48 -- spdk/autotest.sh@118 -- # sync 00:10:58.800 02:16:48 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:10:58.800 02:16:48 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:10:58.800 02:16:48 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:11:00.176 02:16:50 -- spdk/autotest.sh@124 -- # uname -s 00:11:00.176 02:16:50 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:11:00.176 02:16:50 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:11:00.176 02:16:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:00.177 02:16:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:00.177 02:16:50 -- common/autotest_common.sh@10 -- # set +x 00:11:00.177 ************************************ 00:11:00.177 START TEST setup.sh 00:11:00.177 ************************************ 00:11:00.177 02:16:50 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:11:00.177 * Looking for test storage... 
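The trace above runs `get_zoned_devs` before wiping: it walks `/sys/block/nvme*`, reads each namespace's `queue/zoned` attribute, and only proceeds to `dd` once the device is confirmed non-zoned and free of a partition table. A minimal sketch of that zoned-device filter, assuming a `SYSFS_ROOT` override (hypothetical, not in the original scripts) so the logic can be exercised against a fixture tree instead of real hardware:

```shell
#!/usr/bin/env bash
# Sketch of the zoned-device safety check seen in get_zoned_devs above.
# SYSFS_ROOT is a hypothetical parameter for testing; the real scripts
# read /sys/block directly.

SYSFS_ROOT=${SYSFS_ROOT:-/sys/block}

# True (0) if the kernel reports the namespace as zoned (ZNS).
is_block_zoned() {
    local device=$1
    # No zoned attribute at all: treat as a regular block device.
    [[ -e $SYSFS_ROOT/$device/queue/zoned ]] || return 1
    [[ $(<"$SYSFS_ROOT/$device/queue/zoned") != none ]]
}

# Print every nvme namespace that is safe to treat as conventional storage.
get_unzoned_devs() {
    local nvme name
    for nvme in "$SYSFS_ROOT"/nvme*; do
        [[ -d $nvme ]] || continue
        name=$(basename "$nvme")
        is_block_zoned "$name" || echo "$name"
    done
}
```

The `[[ none != none ]]` comparisons in the trace are this same check after expansion: the namespace reported `none`, so it was kept and later zero-filled with `dd`.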
00:11:00.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:11:00.177 02:16:50 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:11:00.177 02:16:50 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:11:00.177 02:16:50 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:11:00.177 02:16:50 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:00.177 02:16:50 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:00.177 02:16:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:00.177 ************************************ 00:11:00.177 START TEST acl 00:11:00.177 ************************************ 00:11:00.177 02:16:50 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:11:00.435 * Looking for test storage... 00:11:00.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:11:00.435 02:16:50 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:11:00.435 02:16:50 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:11:00.435 02:16:50 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:11:00.435 02:16:50 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:11:00.435 02:16:50 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:00.435 02:16:50 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:11:00.435 02:16:50 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:11:00.435 02:16:50 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:00.435 02:16:50 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:00.435 02:16:50 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:11:00.435 02:16:50 
setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:11:00.435 02:16:50 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:11:00.435 02:16:50 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:11:00.435 02:16:50 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:11:00.435 02:16:50 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:00.435 02:16:50 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:11:01.811 02:16:51 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:11:01.811 02:16:51 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:11:01.811 02:16:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:01.811 02:16:51 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:11:01.811 02:16:51 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:11:01.811 02:16:51 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:11:02.747 Hugepages 00:11:02.747 node hugesize free / total 00:11:02.747 02:16:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:11:02.747 02:16:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:11:02.747 02:16:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.747 02:16:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:11:02.747 02:16:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:11:02.747 02:16:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.747 02:16:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:11:02.747 02:16:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:11:02.747 02:16:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.747 00:11:02.747 Type BDF Vendor Device NUMA Driver Device Block devices 00:11:02.747 02:16:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:11:02.747 
02:16:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- 
# continue 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:02.748 02:16:52 
setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:02.748 02:16:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.748 02:16:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:84:00.0 == *:*:*.* ]] 00:11:02.748 02:16:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:11:02.748 02:16:53 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\4\:\0\0\.\0* ]] 00:11:02.748 02:16:53 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:11:02.748 02:16:53 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:11:02.748 02:16:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:02.748 02:16:53 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:11:02.748 02:16:53 setup.sh.acl -- 
setup/acl.sh@54 -- # run_test denied denied 00:11:02.748 02:16:53 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:02.748 02:16:53 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:02.748 02:16:53 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:11:02.748 ************************************ 00:11:02.748 START TEST denied 00:11:02.748 ************************************ 00:11:02.748 02:16:53 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:11:02.748 02:16:53 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:84:00.0' 00:11:02.748 02:16:53 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:11:02.748 02:16:53 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:84:00.0' 00:11:02.748 02:16:53 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:11:02.748 02:16:53 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:11:04.135 0000:84:00.0 (8086 0a54): Skipping denied controller at 0000:84:00.0 00:11:04.135 02:16:54 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:84:00.0 00:11:04.135 02:16:54 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:11:04.135 02:16:54 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:11:04.135 02:16:54 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:84:00.0 ]] 00:11:04.135 02:16:54 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:84:00.0/driver 00:11:04.135 02:16:54 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:11:04.135 02:16:54 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:11:04.135 02:16:54 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:11:04.135 02:16:54 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:04.135 02:16:54 
setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:11:06.039 00:11:06.039 real 0m3.334s 00:11:06.039 user 0m0.977s 00:11:06.039 sys 0m1.593s 00:11:06.039 02:16:56 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:06.039 02:16:56 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:11:06.039 ************************************ 00:11:06.039 END TEST denied 00:11:06.039 ************************************ 00:11:06.039 02:16:56 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:11:06.039 02:16:56 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:11:06.039 02:16:56 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:06.039 02:16:56 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.039 02:16:56 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:11:06.299 ************************************ 00:11:06.299 START TEST allowed 00:11:06.299 ************************************ 00:11:06.299 02:16:56 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:11:06.299 02:16:56 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:84:00.0 00:11:06.299 02:16:56 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:11:06.299 02:16:56 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:84:00.0 .*: nvme -> .*' 00:11:06.299 02:16:56 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:11:06.299 02:16:56 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:11:08.206 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:11:08.206 02:16:58 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:11:08.206 02:16:58 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:11:08.206 02:16:58 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:11:08.206 
02:16:58 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:08.206 02:16:58 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:11:09.586 00:11:09.586 real 0m3.350s 00:11:09.586 user 0m0.917s 00:11:09.586 sys 0m1.422s 00:11:09.586 02:16:59 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:09.586 02:16:59 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:11:09.586 ************************************ 00:11:09.586 END TEST allowed 00:11:09.586 ************************************ 00:11:09.586 02:16:59 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:11:09.586 00:11:09.586 real 0m9.268s 00:11:09.586 user 0m2.995s 00:11:09.586 sys 0m4.619s 00:11:09.586 02:16:59 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:09.586 02:16:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:11:09.586 ************************************ 00:11:09.586 END TEST acl 00:11:09.586 ************************************ 00:11:09.586 02:16:59 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:11:09.586 02:16:59 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:11:09.586 02:16:59 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:09.586 02:16:59 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.586 02:16:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:09.586 ************************************ 00:11:09.586 START TEST hugepages 00:11:09.586 ************************************ 00:11:09.586 02:16:59 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:11:09.586 * Looking for test storage... 
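The `denied` and `allowed` tests above drive `setup.sh` through the `PCI_BLOCKED` / `PCI_ALLOWED` environment variables: with `PCI_BLOCKED=' 0000:84:00.0'` the controller is skipped, and with `PCI_ALLOWED=0000:84:00.0` it is rebound from `nvme` to `vfio-pci`. A hedged sketch of that filtering decision (the real `scripts/setup.sh` logic is richer; `pci_can_use` is a hypothetical name, and the only semantics modeled are "blocked wins, and an empty allow list admits everything"):

```shell
#!/usr/bin/env bash
# Sketch of the allow/block decision the acl test exercises via
# PCI_BLOCKED / PCI_ALLOWED. Assumption: block list takes precedence,
# and an empty allow list means every remaining device is eligible.

pci_can_use() {
    local bdf=$1 dev
    # An explicit block entry always wins.
    for dev in $PCI_BLOCKED; do
        [[ $dev == "$bdf" ]] && return 1
    done
    # Empty allow list: everything not blocked is eligible.
    [[ -z $PCI_ALLOWED ]] && return 0
    # Otherwise the device must appear in the allow list.
    for dev in $PCI_ALLOWED; do
        [[ $dev == "$bdf" ]] && return 0
    done
    return 1
}
```

Under this model the two test runs above are the two branches: the `denied` run blocks `0000:84:00.0` and expects the "Skipping denied controller" line, while the `allowed` run lists only that BDF and expects the `nvme -> vfio-pci` rebind.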
00:11:09.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:11:09.586 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:11:09.586 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:11:09.586 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:11:09.586 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:11:09.586 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:11:09.586 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:11:09.586 02:16:59 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:11:09.586 02:16:59 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:11:09.586 02:16:59 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:11:09.586 02:16:59 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:11:09.586 02:16:59 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:09.586 02:16:59 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:09.586 02:16:59 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:09.586 02:16:59 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:11:09.586 02:16:59 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:09.586 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.586 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.586 02:16:59 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 30280140 kB' 'MemAvailable: 33868016 kB' 'Buffers: 2704 kB' 'Cached: 15758728 kB' 'SwapCached: 0 kB' 'Active: 12735852 kB' 'Inactive: 3522068 kB' 'Active(anon): 12294308 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499612 kB' 'Mapped: 183524 kB' 'Shmem: 11797820 kB' 'KReclaimable: 186296 kB' 'Slab: 457932 kB' 'SReclaimable: 186296 kB' 'SUnreclaim: 271636 kB' 'KernelStack: 10016 kB' 'PageTables: 7776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32437036 kB' 'Committed_AS: 13281480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189820 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB' 00:11:09.586 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.586 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.586 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.586 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 
02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 
02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce 
== \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.587 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.588 02:16:59 setup.sh.hugepages 
-- setup/common.sh@31 -- # read -r var val _ 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.588 02:16:59 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.588 02:16:59 
setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@18 -- # 
global_huge_nr=/proc/sys/vm/nr_hugepages 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:09.588 02:16:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:09.588 02:17:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:11:09.588 
02:17:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:09.588 02:17:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:09.588 02:17:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:09.588 02:17:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:09.588 02:17:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:11:09.588 02:17:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:11:09.588 02:17:00 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:11:09.588 02:17:00 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:09.588 02:17:00 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.588 02:17:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:09.849 ************************************ 00:11:09.849 START TEST default_setup 00:11:09.849 ************************************ 00:11:09.849 02:17:00 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:11:09.849 02:17:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:11:09.849 02:17:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:11:09.849 02:17:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:11:09.849 02:17:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:11:09.849 02:17:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:11:09.849 02:17:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:11:09.849 02:17:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:09.849 02:17:00 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:11:09.849 02:17:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:11:09.849 02:17:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:11:09.849 02:17:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:11:09.849 02:17:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:11:09.849 02:17:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:11:09.849 02:17:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:09.849 02:17:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:09.849 02:17:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:11:09.849 02:17:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:11:09.849 02:17:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:11:09.849 02:17:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:11:09.849 02:17:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:11:09.849 02:17:00 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:11:09.849 02:17:00 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:11:10.788 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:11:10.788 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:11:10.788 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:11:10.788 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:11:10.788 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:11:10.788 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:11:10.788 0000:00:04.1 (8086 3c21): ioatdma -> 
vfio-pci 00:11:10.788 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:11:10.788 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:11:10.788 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:11:10.788 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:11:10.788 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:11:10.788 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:11:10.788 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:11:10.788 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:11:10.788 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:11:11.731 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 32388440 kB' 'MemAvailable: 35976384 kB' 'Buffers: 2704 kB' 'Cached: 15758816 kB' 'SwapCached: 0 kB' 'Active: 12755772 kB' 'Inactive: 3522068 kB' 'Active(anon): 12314228 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519144 kB' 'Mapped: 183652 kB' 'Shmem: 11797908 kB' 'KReclaimable: 186432 kB' 'Slab: 458000 kB' 'SReclaimable: 186432 kB' 'SUnreclaim: 271568 kB' 'KernelStack: 10368 kB' 'PageTables: 7960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485612 kB' 'Committed_AS: 13304224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189996 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 
25163776 kB' 'DirectMap1G: 33554432 kB' 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.731 
02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.731 02:17:02 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.731 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 
02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:11.732 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:11.733 02:17:02 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 32385932 kB' 'MemAvailable: 35973876 kB' 'Buffers: 2704 kB' 'Cached: 15758816 kB' 'SwapCached: 0 kB' 'Active: 12754784 kB' 'Inactive: 3522068 kB' 'Active(anon): 12313240 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518524 kB' 'Mapped: 183624 kB' 'Shmem: 11797908 kB' 'KReclaimable: 186432 kB' 'Slab: 458008 kB' 'SReclaimable: 186432 kB' 'SUnreclaim: 271576 kB' 'KernelStack: 10192 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485612 kB' 'Committed_AS: 13302008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189964 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB' 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.733 02:17:02 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.733 
02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.733 02:17:02 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.733 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.734 02:17:02 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 
00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:11:11.734 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 32389284 kB' 'MemAvailable: 35977228 kB' 'Buffers: 2704 kB' 'Cached: 15758836 kB' 'SwapCached: 0 kB' 'Active: 12753620 kB' 'Inactive: 3522068 kB' 'Active(anon): 12312076 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517356 kB' 'Mapped: 183536 kB' 'Shmem: 11797928 kB' 'KReclaimable: 186432 kB' 'Slab: 457996 kB' 'SReclaimable: 186432 kB' 'SUnreclaim: 271564 kB' 'KernelStack: 10080 kB' 'PageTables: 7924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485612 kB' 'Committed_AS: 13302032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189852 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB' 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.735 02:17:02 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:11:11.735 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... identical per-field trace (setup/common.sh@32 field test, @32 continue, @31 IFS=': ', @31 read -r var val _) repeated for every remaining /proc/meminfo field until HugePages_Rsvd ...]
00:11:12.001 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:11:12.001 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:11:12.001 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:11:12.001 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:11:12.001 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:11:12.001 nr_hugepages=1024
00:11:12.001 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:11:12.001 resv_hugepages=0
00:11:12.001 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:11:12.001 surplus_hugepages=0
00:11:12.001 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:11:12.001 anon_hugepages=0
00:11:12.001 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:11:12.001 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:11:12.001 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:11:12.001 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:11:12.001 02:17:02 setup.sh.hugepages.default_setup -- 
setup/common.sh@18 -- # local node=
00:11:12.001 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:11:12.001 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:11:12.001 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:11:12.002 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:11:12.002 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:11:12.002 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:11:12.002 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:11:12.002 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:11:12.002 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:11:12.002 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 32388908 kB' 'MemAvailable: 35976852 kB' 'Buffers: 2704 kB' 'Cached: 15758856 kB' 'SwapCached: 0 kB' 'Active: 12753432 kB' 'Inactive: 3522068 kB' 'Active(anon): 12311888 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517128 kB' 'Mapped: 183536 kB' 'Shmem: 11797948 kB' 'KReclaimable: 186432 kB' 'Slab: 457996 kB' 'SReclaimable: 186432 kB' 'SUnreclaim: 271564 kB' 'KernelStack: 10048 kB' 'PageTables: 7704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485612 kB' 'Committed_AS: 13302052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189836 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB'
00:11:12.002 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:11:12.002 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:11:12.002 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:11:12.002 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... identical per-field trace (setup/common.sh@32 field test, @32 continue, @31 IFS=': ', @31 read -r var val _) repeated for every remaining /proc/meminfo field until HugePages_Total ...]
00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 22611636 kB' 'MemUsed: 10223056 kB' 'SwapCached: 0 kB' 'Active: 7282164 kB' 'Inactive: 128392 kB' 'Active(anon): 7064796 kB' 'Inactive(anon): 0 kB' 'Active(file): 217368 kB' 'Inactive(file): 128392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7136904 kB' 'Mapped: 85768 kB' 'AnonPages: 276736 kB' 'Shmem: 6791144 kB' 'KernelStack: 5976 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95032 kB' 'Slab: 225796 kB' 'SReclaimable: 95032 kB' 'SUnreclaim: 130764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.004 02:17:02 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.004 02:17:02 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.004 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.005 02:17:02 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.005 02:17:02 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.005 02:17:02 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.005 02:17:02 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.005 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:12.006 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:12.006 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:12.006 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:12.006 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:11:12.006 02:17:02 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:11:12.006 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:12.006 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:12.006 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:12.006 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:12.006 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 
00:11:12.006 node0=1024 expecting 1024 00:11:12.006 02:17:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:11:12.006 00:11:12.006 real 0m2.191s 00:11:12.006 user 0m0.589s 00:11:12.006 sys 0m0.782s 00:11:12.006 02:17:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:12.006 02:17:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:11:12.006 ************************************ 00:11:12.006 END TEST default_setup 00:11:12.006 ************************************ 00:11:12.006 02:17:02 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:11:12.006 02:17:02 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:11:12.006 02:17:02 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:12.006 02:17:02 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:12.006 02:17:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:12.006 ************************************ 00:11:12.006 START TEST per_node_1G_alloc 00:11:12.006 ************************************ 00:11:12.006 02:17:02 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:11:12.006 02:17:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:11:12.006 02:17:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:11:12.006 02:17:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:11:12.006 02:17:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:11:12.006 02:17:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:11:12.006 02:17:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:11:12.006 02:17:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:11:12.006 02:17:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:12.006 02:17:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:11:12.006 02:17:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:11:12.006 02:17:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:11:12.006 02:17:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:12.006 02:17:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:11:12.006 02:17:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:11:12.006 02:17:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:12.006 02:17:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:12.006 02:17:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:11:12.006 02:17:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:11:12.006 02:17:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:11:12.006 02:17:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:11:12.006 02:17:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:11:12.006 02:17:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:11:12.006 02:17:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:11:12.006 02:17:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:11:12.006 02:17:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:11:12.006 02:17:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:12.006 02:17:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:11:12.971 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:11:12.971 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:11:12.971 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:11:12.971 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:11:12.971 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:11:12.971 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:11:12.971 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:11:12.971 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:11:12.971 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:11:12.971 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:11:12.971 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:11:12.971 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:11:12.971 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:11:12.971 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:11:12.971 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:11:12.971 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:11:12.971 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local 
sorted_t 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 32387668 kB' 'MemAvailable: 
35975612 kB' 'Buffers: 2704 kB' 'Cached: 15758928 kB' 'SwapCached: 0 kB' 'Active: 12753832 kB' 'Inactive: 3522068 kB' 'Active(anon): 12312288 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516984 kB' 'Mapped: 183548 kB' 'Shmem: 11798020 kB' 'KReclaimable: 186432 kB' 'Slab: 458004 kB' 'SReclaimable: 186432 kB' 'SUnreclaim: 271572 kB' 'KernelStack: 10032 kB' 'PageTables: 7668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485612 kB' 'Committed_AS: 13302216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189852 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB' 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.238 02:17:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.238 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.239 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.239 02:17:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.240 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.241 02:17:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:13.241 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.241 02:17:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 32387996 kB' 'MemAvailable: 35975940 kB' 'Buffers: 2704 kB' 'Cached: 15758928 kB' 'SwapCached: 0 kB' 'Active: 12753500 kB' 'Inactive: 3522068 kB' 'Active(anon): 12311956 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517168 kB' 'Mapped: 183548 kB' 'Shmem: 11798020 kB' 'KReclaimable: 186432 kB' 'Slab: 458084 kB' 'SReclaimable: 186432 kB' 'SUnreclaim: 271652 kB' 'KernelStack: 10032 kB' 'PageTables: 7660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485612 kB' 'Committed_AS: 13302248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189820 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB' 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.242 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.243 02:17:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.243 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.243 
02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.244 02:17:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.244 02:17:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:11:13.244 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # 
get_meminfo HugePages_Rsvd 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 32388272 kB' 'MemAvailable: 35976216 kB' 'Buffers: 2704 kB' 'Cached: 15758948 kB' 'SwapCached: 0 kB' 'Active: 12753492 kB' 'Inactive: 3522068 kB' 'Active(anon): 12311948 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517140 kB' 'Mapped: 183548 kB' 'Shmem: 11798040 kB' 'KReclaimable: 186432 kB' 'Slab: 458084 kB' 'SReclaimable: 186432 kB' 'SUnreclaim: 271652 kB' 'KernelStack: 10000 kB' 'PageTables: 7612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 33485612 kB' 'Committed_AS: 13302256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189804 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB' 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.245 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.246 02:17:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.246 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.247 
02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.247 02:17:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.247 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.248 02:17:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.248 02:17:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:11:13.248 nr_hugepages=1024 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:13.248 resv_hugepages=0 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:13.248 surplus_hugepages=0 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:13.248 anon_hugepages=0 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:13.248 02:17:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.248 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 32388936 kB' 'MemAvailable: 35976880 kB' 'Buffers: 2704 kB' 'Cached: 15758972 kB' 'SwapCached: 0 kB' 'Active: 12753496 kB' 'Inactive: 3522068 kB' 'Active(anon): 12311952 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517100 kB' 'Mapped: 183548 kB' 'Shmem: 11798064 kB' 'KReclaimable: 186432 kB' 'Slab: 458084 kB' 'SReclaimable: 186432 kB' 'SUnreclaim: 271652 kB' 'KernelStack: 9984 kB' 'PageTables: 7564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485612 kB' 'Committed_AS: 13302280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189804 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 
kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB' 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.249 02:17:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.249 02:17:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.249 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.250 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.251 02:17:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 23660872 kB' 'MemUsed: 9173820 kB' 
'SwapCached: 0 kB' 'Active: 7282692 kB' 'Inactive: 128392 kB' 'Active(anon): 7065324 kB' 'Inactive(anon): 0 kB' 'Active(file): 217368 kB' 'Inactive(file): 128392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7136908 kB' 'Mapped: 85780 kB' 'AnonPages: 277256 kB' 'Shmem: 6791148 kB' 'KernelStack: 5960 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95032 kB' 'Slab: 225864 kB' 'SReclaimable: 95032 kB' 'SUnreclaim: 130832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.251 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.252 02:17:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.252 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.253 02:17:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.253 02:17:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:13.253 02:17:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19456476 kB' 'MemFree: 8727468 kB' 'MemUsed: 10729008 kB' 'SwapCached: 0 kB' 'Active: 5470952 kB' 'Inactive: 3393676 kB' 'Active(anon): 5246776 kB' 'Inactive(anon): 0 kB' 'Active(file): 224176 kB' 'Inactive(file): 3393676 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8624812 kB' 'Mapped: 97768 kB' 'AnonPages: 239956 kB' 'Shmem: 5006960 kB' 'KernelStack: 4072 kB' 'PageTables: 3628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91400 kB' 'Slab: 232220 kB' 'SReclaimable: 91400 kB' 'SUnreclaim: 140820 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.253 02:17:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:11:13.253 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.254 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.255 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.255 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.255 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.255 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.255 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:13.255 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:13.255 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:13.255 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:13.255 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:11:13.255 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:13.255 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:13.255 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:13.255 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:13.255 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:13.255 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:11:13.255 node0=512 expecting 512 00:11:13.255 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:13.255 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:13.255 
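The long field-by-field trace above is one call to `get_meminfo HugePages_Surp 1` from `setup/common.sh`: it reads the per-node meminfo file, strips the `Node N ` prefix from every line, splits on `': '`, and echoes the value once the requested field matches. A standalone sketch of that loop (the sample file below is illustrative, not taken from this run):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo helper exercised in the trace above:
# strip the "Node N " prefix, split each line on ': ', and print the
# value of the requested field (0 if it is absent).
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 mem_f=$2
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    echo 0
}

# Illustrative stand-in for /sys/devices/system/node/node1/meminfo:
cat > /tmp/node1_meminfo <<'EOF'
Node 1 MemTotal:       19456476 kB
Node 1 HugePages_Total:   512
Node 1 HugePages_Free:    512
Node 1 HugePages_Surp:      0
EOF

get_meminfo HugePages_Surp /tmp/node1_meminfo   # prints 0
```

The real helper falls back to `/proc/meminfo` when no node is given, which is why the trace shows `mem_f=/proc/meminfo` being replaced by the node1 path only after the `-e` check succeeds.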
02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:13.255 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:11:13.255 node1=512 expecting 512 00:11:13.255 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:11:13.255 00:11:13.255 real 0m1.266s 00:11:13.255 user 0m0.589s 00:11:13.255 sys 0m0.714s 00:11:13.255 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:13.255 02:17:03 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:13.255 ************************************ 00:11:13.255 END TEST per_node_1G_alloc 00:11:13.255 ************************************ 00:11:13.255 02:17:03 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:11:13.255 02:17:03 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:11:13.255 02:17:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:13.255 02:17:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:13.255 02:17:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:13.255 ************************************ 00:11:13.255 START TEST even_2G_alloc 00:11:13.255 ************************************ 00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:13.255 02:17:03 
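Once both per-node reads return, the `sorted_t[nodes_test[node]]=1` lines traced above verify the distribution with an index-as-set trick: each node's hugepage count becomes an array index, so duplicates collapse and an even split leaves exactly one surviving index. A minimal standalone sketch (the node counts here mirror the 512/512 seen in this run):

```shell
#!/usr/bin/env bash
# Index-as-set uniformity check, as in the sorted_t/sorted_s trace lines.
declare -a nodes_test=([0]=512 [1]=512)
declare -a sorted_t=()

for node in "${!nodes_test[@]}"; do
    echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
    sorted_t[nodes_test[node]]=1   # index = per-node count; duplicates collapse
done

# Exactly one surviving index means every node got the same count.
(( ${#sorted_t[@]} == 1 )) && echo "even allocation: ${!sorted_t[*]} per node"
```

This is why the trace ends with `[[ 512 == \5\1\2 ]]` succeeding: the single surviving count is compared against the expected per-node value.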
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # 
NRHUGE=1024
00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:11:13.255 02:17:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:11:14.192 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver
00:11:14.192 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver
00:11:14.192 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver
00:11:14.192 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver
00:11:14.192 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver
00:11:14.192 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver
00:11:14.192 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver
00:11:14.192 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver
00:11:14.192 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver
00:11:14.192 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver
00:11:14.192 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver
00:11:14.192 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver
00:11:14.192 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver
00:11:14.192 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver
00:11:14.192 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver
00:11:14.192 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver
00:11:14.192 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 32388256 kB' 'MemAvailable: 35976200 kB' 'Buffers: 2704 kB' 'Cached: 15759056 kB' 'SwapCached: 0 kB' 'Active: 12753520 kB' 'Inactive: 3522068 kB' 'Active(anon): 12311976 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516976 kB' 'Mapped: 183624 kB' 'Shmem: 11798148 kB' 'KReclaimable: 186432 kB' 'Slab: 457732 kB' 'SReclaimable: 186432 kB' 'SUnreclaim: 271300 kB' 'KernelStack: 10016 kB' 'PageTables: 7584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485612 kB' 'Committed_AS: 13302360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189884 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB'
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:11:14.192 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:11:14.192 02:17:04
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 32402132 kB' 'MemAvailable: 35990076 kB' 'Buffers: 2704 kB' 'Cached: 15759060 kB' 'SwapCached: 0 kB' 'Active: 12753816 kB' 'Inactive: 3522068 kB' 'Active(anon): 12312272 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517248 kB' 'Mapped: 183560 kB' 'Shmem: 11798152 kB' 'KReclaimable: 186432 kB' 'Slab: 457724 kB' 'SReclaimable: 186432 kB' 'SUnreclaim: 271292 kB' 'KernelStack: 10032 kB' 'PageTables: 7624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485612 kB' 'Committed_AS: 13302376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189852 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB'
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:11:14.193 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:11:14.456 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[
00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.457 02:17:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.457 02:17:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.457 02:17:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 32402900 kB' 'MemAvailable: 35990844 kB' 'Buffers: 2704 kB' 'Cached: 15759076 kB' 'SwapCached: 0 kB' 'Active: 12753812 kB' 'Inactive: 3522068 kB' 'Active(anon): 12312268 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 517236 kB' 'Mapped: 183560 kB' 'Shmem: 11798168 kB' 'KReclaimable: 186432 kB' 'Slab: 457800 kB' 'SReclaimable: 186432 kB' 'SUnreclaim: 271368 kB' 'KernelStack: 10032 kB' 'PageTables: 7644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485612 kB' 'Committed_AS: 13302400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189836 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB' 00:11:14.457 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [... identical IFS=': ' / read -r var val _ / continue iterations elided for every /proc/meminfo key before HugePages_Rsvd ...] 00:11:14.458 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:14.458 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:11:14.458 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:11:14.458 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 --
# resv=0 00:11:14.458 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:11:14.458 nr_hugepages=1024 00:11:14.458 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:14.458 resv_hugepages=0 00:11:14.458 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:14.458 surplus_hugepages=0 00:11:14.458 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:14.458 anon_hugepages=0 00:11:14.458 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:14.458 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:11:14.458 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:14.458 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:14.458 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:11:14.458 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:11:14.458 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:14.458 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 32402900 kB' 'MemAvailable: 35990844 kB' 'Buffers: 2704 kB' 'Cached: 15759076 kB' 'SwapCached: 0 kB' 'Active: 12753488 kB' 'Inactive: 3522068 kB' 'Active(anon): 12311944 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516912 kB' 'Mapped: 183560 kB' 'Shmem: 11798168 kB' 'KReclaimable: 186432 kB' 'Slab: 457800 kB' 'SReclaimable: 186432 kB' 'SUnreclaim: 271368 kB' 'KernelStack: 10016 kB' 'PageTables: 7596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485612 kB' 'Committed_AS: 13302420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189852 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.459 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:14.460 
02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:14.460 
02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 23675096 kB' 'MemUsed: 9159596 kB' 'SwapCached: 0 kB' 'Active: 7283032 kB' 'Inactive: 128392 kB' 'Active(anon): 7065664 kB' 'Inactive(anon): 0 kB' 'Active(file): 217368 kB' 'Inactive(file): 128392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7136916 kB' 'Mapped: 85792 kB' 'AnonPages: 277600 kB' 'Shmem: 6791156 kB' 'KernelStack: 5992 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95032 kB' 'Slab: 225728 kB' 'SReclaimable: 95032 kB' 'SUnreclaim: 130696 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:11:14.460 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19456476 kB' 'MemFree: 8728408 kB' 'MemUsed: 10728068 kB' 'SwapCached: 0 kB' 'Active: 5470868 kB' 'Inactive: 3393676 kB' 'Active(anon): 5246692 kB' 'Inactive(anon): 0 kB' 'Active(file): 224176 kB' 'Inactive(file): 3393676 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8624928 kB' 'Mapped: 97768 kB' 'AnonPages: 239660 kB' 'Shmem: 5007076 kB' 'KernelStack: 4056 kB' 'PageTables: 3524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91400 kB' 'Slab: 232072 kB' 'SReclaimable: 91400 kB' 'SUnreclaim: 140672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.461 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.462 02:17:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.462 02:17:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:11:14.462 node0=512 expecting 512 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:11:14.462 node1=512 expecting 512 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:11:14.462 00:11:14.462 real 0m1.146s 00:11:14.462 user 0m0.502s 00:11:14.462 sys 0m0.678s 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:14.462 02:17:04 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:14.462 ************************************ 00:11:14.462 END TEST even_2G_alloc 00:11:14.462 ************************************ 00:11:14.462 
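The long xtrace runs above all come from the `get_meminfo` helper in `setup/common.sh`: it reads `/proc/meminfo` (or the per-node file under `/sys/devices/system/node/nodeN/meminfo`), splits each `Field: value kB` line on `': '`, and loops with `continue` until the requested field (here `HugePages_Surp`) matches. A minimal standalone sketch of that parsing pattern follows; the structure is inferred from the trace, so details (the `sed` prefix-strip standing in for the script's `mapfile`/`${mem[@]#Node +([0-9]) }` expansion, the fallback `echo 0`) are illustrative rather than a copy of the real script:

```shell
#!/usr/bin/env bash
# Sketch of the meminfo lookup pattern visible in the trace above.
# Splits each "Field: value kB" line on ": " and prints the value of
# the requested field; an optional second argument selects a NUMA
# node's meminfo file, mirroring the node0/node1 lookups in the log.
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # Per-node stats live under /sys/devices/system/node/nodeN/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        # Skip every field until the requested one, as in the trace
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")  # per-node files prefix "Node N "
    echo 0
}

get_meminfo MemTotal
```

With no matching field the sketch falls back to printing `0`, which matches the `echo 0` / `return 0` pair the trace shows when `HugePages_Surp` resolves to zero on both nodes.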
02:17:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:11:14.462 02:17:04 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:11:14.462 02:17:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:14.462 02:17:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:14.462 02:17:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:14.462 ************************************ 00:11:14.462 START TEST odd_alloc 00:11:14.462 ************************************ 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 
)) 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:14.462 02:17:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:11:15.401 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:11:15.401 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:11:15.401 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:11:15.401 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:11:15.401 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:11:15.401 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:11:15.401 0000:00:04.2 (8086 3c22): Already using the 
vfio-pci driver 00:11:15.401 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:11:15.401 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:11:15.401 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:11:15.401 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:11:15.401 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:11:15.401 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:11:15.401 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:11:15.401 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:11:15.401 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:11:15.401 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:11:15.401 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:11:15.401 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:11:15.401 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:15.401 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:15.401 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:15.401 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:15.401 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:15.401 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 32424408 kB' 'MemAvailable: 36012352 kB' 'Buffers: 2704 kB' 'Cached: 15759184 kB' 'SwapCached: 0 kB' 'Active: 12753136 kB' 'Inactive: 3522068 kB' 'Active(anon): 12311592 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516448 kB' 'Mapped: 182644 kB' 'Shmem: 11798276 kB' 'KReclaimable: 186432 kB' 'Slab: 457604 kB' 'SReclaimable: 186432 kB' 'SUnreclaim: 271172 kB' 'KernelStack: 10000 kB' 'PageTables: 7672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484588 kB' 'Committed_AS: 13268216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189820 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 
kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB' 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.402 02:17:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.402 02:17:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.402 02:17:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.402 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.403 
02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.403 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.672 02:17:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.672 02:17:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 32423404 kB' 'MemAvailable: 36011348 kB' 'Buffers: 2704 kB' 'Cached: 15759188 kB' 'SwapCached: 0 kB' 'Active: 12753756 kB' 'Inactive: 3522068 kB' 'Active(anon): 12312212 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517044 kB' 'Mapped: 182620 kB' 'Shmem: 11798280 kB' 'KReclaimable: 186432 kB' 'Slab: 457580 kB' 'SReclaimable: 186432 kB' 'SUnreclaim: 271148 kB' 'KernelStack: 9968 kB' 'PageTables: 7576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484588 kB' 'Committed_AS: 13267864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189756 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB' 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.672 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.673 
02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.673 
02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.673 02:17:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.673 02:17:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31-32 -- # [trace condensed: the IFS=': ' read loop iterates the remaining /proc/meminfo keys (PageTables .. Unaccepted, then HugePages_Total/Free/Rsvd); none match HugePages_Surp] 00:11:15.674 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.674 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:11:15.674 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:15.674 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:15.674 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:15.674 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:15.674 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:11:15.674 02:17:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@19 -- # local var val 00:11:15.674 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:15.674 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:15.674 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:15.674 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:15.674 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:15.674 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:15.674 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.674 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.674 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 32423556 kB' 'MemAvailable: 36011500 kB' 'Buffers: 2704 kB' 'Cached: 15759204 kB' 'SwapCached: 0 kB' 'Active: 12752880 kB' 'Inactive: 3522068 kB' 'Active(anon): 12311336 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516136 kB' 'Mapped: 182620 kB' 'Shmem: 11798296 kB' 'KReclaimable: 186432 kB' 'Slab: 457604 kB' 'SReclaimable: 186432 kB' 'SUnreclaim: 271172 kB' 'KernelStack: 9936 kB' 'PageTables: 7480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484588 kB' 'Committed_AS: 13267884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189740 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB' 00:11:15.674 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [trace condensed: the IFS=': ' read loop iterates /proc/meminfo keys (MemTotal .. HugePages_Free); none match HugePages_Rsvd] 
00:11:15.676 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.676 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.676 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:11:15.676 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:15.676 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:15.676 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:11:15.676 nr_hugepages=1025 00:11:15.676 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:15.676 resv_hugepages=0 00:11:15.676 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:15.676 surplus_hugepages=0 00:11:15.676 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:15.676 anon_hugepages=0 00:11:15.676 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:11:15.676 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:11:15.676 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:15.676 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:15.676 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:11:15.676 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:15.676 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:15.676 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:15.676 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:15.676 02:17:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:15.676 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:15.676 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:15.676 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.676 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.676 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 32431368 kB' 'MemAvailable: 36019312 kB' 'Buffers: 2704 kB' 'Cached: 15759228 kB' 'SwapCached: 0 kB' 'Active: 12749504 kB' 'Inactive: 3522068 kB' 'Active(anon): 12307960 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512780 kB' 'Mapped: 181644 kB' 'Shmem: 11798320 kB' 'KReclaimable: 186432 kB' 'Slab: 457596 kB' 'SReclaimable: 186432 kB' 'SUnreclaim: 271164 kB' 'KernelStack: 9888 kB' 'PageTables: 7196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484588 kB' 'Committed_AS: 13251380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189692 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB' 00:11:15.676 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.676 02:17:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:11:15.676 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [trace condensed: the IFS=': ' read loop iterates /proc/meminfo keys (MemFree .. SwapTotal); none match HugePages_Total] 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.677 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 
00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 23702304 kB' 'MemUsed: 9132388 kB' 'SwapCached: 0 kB' 'Active: 7282076 kB' 'Inactive: 128392 kB' 'Active(anon): 
7064708 kB' 'Inactive(anon): 0 kB' 'Active(file): 217368 kB' 'Inactive(file): 128392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7136956 kB' 'Mapped: 85052 kB' 'AnonPages: 276632 kB' 'Shmem: 6791196 kB' 'KernelStack: 5896 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95032 kB' 'Slab: 225628 kB' 'SReclaimable: 95032 kB' 'SUnreclaim: 130596 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.678 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.679 02:17:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.679 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.680 02:17:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 
00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19456476 kB' 'MemFree: 8729064 kB' 'MemUsed: 10727412 kB' 'SwapCached: 0 kB' 'Active: 5467100 kB' 'Inactive: 3393676 kB' 'Active(anon): 5242924 kB' 'Inactive(anon): 0 kB' 'Active(file): 224176 kB' 'Inactive(file): 3393676 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8624976 kB' 'Mapped: 96592 kB' 'AnonPages: 235844 kB' 'Shmem: 5007124 kB' 'KernelStack: 3976 kB' 'PageTables: 3076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91392 kB' 'Slab: 231960 kB' 'SReclaimable: 91392 kB' 'SUnreclaim: 140568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.680 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.681 
02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.681 02:17:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.681 02:17:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.681 02:17:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:15.681 02:17:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:11:15.681 node0=512 expecting 513 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:11:15.681 node1=513 expecting 512 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:11:15.681 00:11:15.681 real 0m1.159s 00:11:15.681 user 0m0.522s 00:11:15.681 sys 0m0.670s 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:15.681 02:17:05 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:15.681 
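The wall of `IFS=': '` / `read -r var val _` / `continue` lines above is xtrace from a single loop: `setup/common.sh`'s `get_meminfo` walks the node's meminfo one row at a time and echoes the value whose key matches the requested field (here `HugePages_Surp`, yielding `0` for both nodes). A minimal sketch of that pattern follows; the helper name mirrors the log, but the structure is reconstructed from the trace, and the `MEMINFO_FILE` override is an addition for testability, not part of the real script:

```shell
# Sketch of the meminfo key/value scan seen in the xtrace above.
# Reconstructed from the trace, not the exact setup/common.sh source;
# MEMINFO_FILE is an assumed override added here for testing.
get_meminfo() {
    local get=$1 node=$2
    local mem_f=${MEMINFO_FILE:-/proc/meminfo}
    # For "get_meminfo HugePages_Surp 1", prefer node1's own meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#Node [0-9] }   # per-node rows carry a "Node N " prefix
        IFS=': ' read -r var val _ <<<"$line"
        # Every non-matching key is one "continue" line in the log.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    echo 0
}
```

Because the loop tests every meminfo row against the one requested key, each lookup emits one traced comparison per field, which is why a single `get_meminfo HugePages_Surp 1` call accounts for dozens of log lines.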
************************************ 00:11:15.681 END TEST odd_alloc 00:11:15.681 ************************************ 00:11:15.681 02:17:05 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:11:15.681 02:17:05 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:11:15.681 02:17:05 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:15.681 02:17:05 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:15.681 02:17:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:15.681 ************************************ 00:11:15.681 START TEST custom_alloc 00:11:15.681 ************************************ 00:11:15.681 02:17:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:11:15.681 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:11:15.681 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 
00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # 
get_test_nr_hugepages 2097152 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # 
HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 
-- # return 0 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:15.682 02:17:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:11:16.625 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:11:16.625 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:11:16.625 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:11:16.625 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:11:16.625 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:11:16.625 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:11:16.625 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:11:16.625 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:11:16.625 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:11:16.626 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:11:16.626 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:11:16.626 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:11:16.626 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:11:16.626 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:11:16.626 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:11:16.626 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:11:16.626 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:11:16.626 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:11:16.626 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:11:16.626 02:17:07 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@89 -- # local node 00:11:16.626 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:16.626 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:16.626 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:16.626 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:16.626 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:16.626 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:16.626 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:16.626 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:16.626 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:11:16.626 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:11:16.626 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:16.626 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:16.626 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:16.626 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:16.626 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:16.626 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:16.626 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.626 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.626 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 
'MemFree: 31382100 kB' 'MemAvailable: 34970040 kB' 'Buffers: 2704 kB' 'Cached: 15759316 kB' 'SwapCached: 0 kB' 'Active: 12749784 kB' 'Inactive: 3522068 kB' 'Active(anon): 12308240 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513032 kB' 'Mapped: 181712 kB' 'Shmem: 11798408 kB' 'KReclaimable: 186424 kB' 'Slab: 457616 kB' 'SReclaimable: 186424 kB' 'SUnreclaim: 271192 kB' 'KernelStack: 9888 kB' 'PageTables: 7144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961324 kB' 'Committed_AS: 13251448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189756 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB' 00:11:16.626
[repetitive trace elided: each meminfo key above is compared in turn via "setup/common.sh@32 -- # [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" and skipped with "setup/common.sh@32 -- # continue" until AnonHugePages matches]
02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:11:16.627 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:11:16.627 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:16.627
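The trace above shows the get_meminfo helper from setup/common.sh scanning /proc/meminfo line by line, skipping every key that does not match the requested field (the long run of `[[ <key> == ... ]] / continue` comparisons) and echoing the matching value. A minimal sketch of that loop, reconstructed from the trace (the real SPDK helper differs in details such as per-node meminfo handling, so names and structure here are assumptions):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo loop seen in the xtrace output above.
# Reads a meminfo-style file and prints the value of the requested key.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    # Mirrors "IFS=': '" + "read -r var val _" from setup/common.sh@31.
    while IFS=': ' read -r var val _; do
        # Non-matching keys produce the repeated "continue" trace lines.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Demo against a small sample file rather than the live /proc/meminfo:
sample=$(mktemp)
printf '%s\n' 'MemTotal: 52291168 kB' 'HugePages_Total: 1536' \
    'HugePages_Surp: 0' 'AnonHugePages: 0 kB' > "$sample"
get_meminfo HugePages_Surp "$sample"   # prints 0
get_meminfo MemTotal "$sample"         # prints 52291168
```

With `IFS=': '`, `read -r var val _` splits "MemTotal: 52291168 kB" into the key, the numeric value, and the trailing unit, which is why the helper can echo a bare number for both "kB" fields and unitless HugePages counters.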
02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:16.627 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:16.627 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:11:16.627 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:11:16.627 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:16.627 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:16.627 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:16.627 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:16.627 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:16.627 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:16.891 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.891 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.891 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 31397668 kB' 'MemAvailable: 34985608 kB' 'Buffers: 2704 kB' 'Cached: 15759316 kB' 'SwapCached: 0 kB' 'Active: 12749372 kB' 'Inactive: 3522068 kB' 'Active(anon): 12307828 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512612 kB' 'Mapped: 181664 kB' 'Shmem: 11798408 kB' 'KReclaimable: 186424 kB' 'Slab: 457608 kB' 'SReclaimable: 186424 kB' 'SUnreclaim: 271184 kB' 'KernelStack: 9872 kB' 'PageTables: 7056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 
kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961324 kB' 'Committed_AS: 13251464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189708 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB' 00:11:16.891
[repetitive trace elided: each meminfo key above is compared in turn via "setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" and skipped with "setup/common.sh@32 -- # continue"]
02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:16.893 02:17:07 
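The long run of `continue` traces above is setup/common.sh's meminfo lookup: it reads /proc/meminfo line by line with `IFS=': '`, skips every field that is not the requested key, and echoes the matching value (here yielding `surp=0`). A minimal stand-alone sketch of that pattern, with an illustrative function name and a sample file rather than SPDK's actual helper:

```shell
#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup: scan "Key: value" pairs and
# print the value for the requested key, or 0 if the key is absent.
get_meminfo_value() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Skip every field that is not the one we were asked for,
        # exactly like the "[[ $var == ... ]] || continue" trace above.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$file"
    echo 0
}

# Demo against a small meminfo-like snippet instead of the live file.
printf '%s\n' 'HugePages_Total: 1536' 'HugePages_Free: 1536' \
    'HugePages_Rsvd: 0' 'HugePages_Surp: 0' > /tmp/meminfo.sample
get_meminfo_value HugePages_Total /tmp/meminfo.sample   # prints 1536
get_meminfo_value HugePages_Surp  /tmp/meminfo.sample   # prints 0
```

Because the loop compares the raw key against one literal pattern per call, the trace emits a `continue` line for every non-matching meminfo field, which is why each lookup produces dozens of near-identical log lines.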
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 31397988 kB' 'MemAvailable: 34985928 kB' 'Buffers: 2704 kB' 'Cached: 15759336 kB' 'SwapCached: 0 kB' 'Active: 12749376 kB' 'Inactive: 3522068 kB' 'Active(anon): 12307832 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512628 kB' 'Mapped: 181664 kB' 'Shmem: 11798428 kB' 'KReclaimable: 186424 kB' 'Slab: 457664 kB' 'SReclaimable: 186424 kB' 'SUnreclaim: 271240 kB' 'KernelStack: 9888 kB' 'PageTables: 7132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 32961324 kB' 'Committed_AS: 13251488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189708 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB' 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.893 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.894 
02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.894 02:17:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.894 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.895 02:17:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.895 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.896 02:17:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:11:16.896 nr_hugepages=1536 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:16.896 resv_hugepages=0 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:16.896 surplus_hugepages=0 00:11:16.896 02:17:07 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:16.896 anon_hugepages=0 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.896 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.897 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 31397988 kB' 'MemAvailable: 34985928 kB' 'Buffers: 2704 kB' 'Cached: 15759336 kB' 'SwapCached: 0 kB' 'Active: 12749528 kB' 'Inactive: 3522068 kB' 'Active(anon): 12307984 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512780 kB' 'Mapped: 181664 kB' 'Shmem: 11798428 kB' 'KReclaimable: 186424 kB' 'Slab: 457664 kB' 'SReclaimable: 186424 kB' 'SUnreclaim: 271240 kB' 'KernelStack: 9888 kB' 'PageTables: 7132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961324 kB' 'Committed_AS: 13251508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189708 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB' 00:11:16.897 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.897 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.897 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.897 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.897
[... repeated xtrace elided: the same "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue / IFS=': ' / read -r var val _" cycle repeats for each remaining /proc/meminfo key (MemFree through Unaccepted) until HugePages_Total matches ...]
02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.899 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:11:16.899 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:11:16.899 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:11:16.899 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:16.899 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:11:16.899 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:16.899 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:11:16.899 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:16.899 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:11:16.899 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:11:16.899 02:17:07
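[editor's note] The trace above is the `get_meminfo` helper scanning meminfo key/value pairs with `IFS=': ' read -r var val _` and `continue`-ing past every key that does not match the requested one, then echoing the matched value (here `HugePages_Total` → 1536). A minimal standalone sketch of that technique, reconstructed from the xtrace rather than copied from the SPDK source (the function body and argument handling here are assumptions):

```shell
#!/usr/bin/env bash
# Sketch of the meminfo lookup seen in the trace: walk the file line by
# line, splitting on ": ", and print the value of the first matching key.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Skip every key that is not the one we were asked for.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# On the machine in this log, this printed 1536:
get_meminfo HugePages_Total
```

For per-node queries the real script points the same loop at `/sys/devices/system/node/node<N>/meminfo` (after stripping the `Node <N> ` prefix), which is what the second pass below does for `HugePages_Surp` on node 0.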
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:16.899 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:16.899 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:16.899 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:16.899 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:16.899 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:11:16.899 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:11:16.900 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:16.900 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:16.900 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:16.900 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:16.900 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:16.900 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:16.900 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.900 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.900 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 23708524 kB' 'MemUsed: 9126168 kB' 'SwapCached: 0 kB' 'Active: 7282340 kB' 'Inactive: 128392 kB' 'Active(anon): 7064972 kB' 'Inactive(anon): 0 kB' 'Active(file): 217368 kB' 'Inactive(file): 128392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7137020 kB' 
'Mapped: 85064 kB' 'AnonPages: 276996 kB' 'Shmem: 6791260 kB' 'KernelStack: 5896 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95032 kB' 'Slab: 225588 kB' 'SReclaimable: 95032 kB' 'SUnreclaim: 130556 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:11:16.900 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.900 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.900 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.900 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.900
[... repeated xtrace elided: the same "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _" cycle repeats for each remaining node-0 meminfo key (MemFree through HugePages_Free) until HugePages_Surp matches ...]
02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 1 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19456476 kB' 'MemFree: 7687700 kB' 'MemUsed: 11768776 kB' 'SwapCached: 0 kB' 'Active: 5467752 kB' 'Inactive: 3393676 kB' 'Active(anon): 5243576 kB' 'Inactive(anon): 0 kB' 'Active(file): 224176 kB' 'Inactive(file): 3393676 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8625080 kB' 'Mapped: 96660 kB' 'AnonPages: 236436 kB' 'Shmem: 5007228 kB' 'KernelStack: 4024 kB' 'PageTables: 2984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91392 kB' 'Slab: 232076 kB' 'SReclaimable: 91392 kB' 'SUnreclaim: 140684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.902 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.903 
02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.903 02:17:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.903 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:11:16.904 node0=512 expecting 512 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # 
sorted_t[nodes_test[node]]=1 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:11:16.904 node1=1024 expecting 1024 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:11:16.904 00:11:16.904 real 0m1.170s 00:11:16.904 user 0m0.534s 00:11:16.904 sys 0m0.661s 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:16.904 02:17:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:16.904 ************************************ 00:11:16.904 END TEST custom_alloc 00:11:16.904 ************************************ 00:11:16.904 02:17:07 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:11:16.904 02:17:07 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:11:16.904 02:17:07 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:16.904 02:17:07 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:16.904 02:17:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:16.904 ************************************ 00:11:16.904 START TEST no_shrink_alloc 00:11:16.904 ************************************ 00:11:16.904 02:17:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:11:16.904 02:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:11:16.904 02:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:11:16.904 02:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:11:16.904 02:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # 
shift 00:11:16.904 02:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:11:16.904 02:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:11:16.904 02:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:16.904 02:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:11:16.904 02:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:11:16.904 02:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:11:16.904 02:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:16.904 02:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:11:16.904 02:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:11:16.904 02:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:16.904 02:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:16.904 02:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:11:16.904 02:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:11:16.904 02:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:11:16.904 02:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:11:16.904 02:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:11:16.904 02:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:16.904 02:17:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:11:17.844 
0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:11:17.844 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:11:17.844 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:11:17.844 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:11:17.844 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:11:17.844 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:11:17.844 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:11:17.844 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:11:17.844 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:11:17.844 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:11:17.844 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:11:17.844 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:11:17.844 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:11:17.844 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:11:17.844 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:11:17.844 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:11:17.844 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:11:17.844 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:11:17.844 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:11:17.844 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:17.844 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:17.844 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:17.844 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:17.844 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:17.844 02:17:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:17.844 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:17.844 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:17.844 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:17.844 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:17.844 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:17.844 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:17.844 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:17.844 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:17.844 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:17.844 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:17.844 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.844 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 32425520 kB' 'MemAvailable: 36013460 kB' 'Buffers: 2704 kB' 'Cached: 15759436 kB' 'SwapCached: 0 kB' 'Active: 12750020 kB' 'Inactive: 3522068 kB' 'Active(anon): 12308476 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513088 kB' 'Mapped: 181688 kB' 'Shmem: 11798528 kB' 'KReclaimable: 186424 kB' 'Slab: 457652 kB' 'SReclaimable: 186424 kB' 'SUnreclaim: 271228 kB' 'KernelStack: 9888 kB' 'PageTables: 7144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485612 kB' 'Committed_AS: 13251908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189740 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB'
[... repeated xtrace elided: the key scan skips MemTotal through Zswapped while searching for AnonHugePages ...]
00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.845 02:17:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.845 02:17:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.845 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:17.846 
02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:17.846 02:17:08 
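For readability: the long xtrace above is one pass of SPDK's `get_meminfo` helper in `setup/common.sh`, which scans a meminfo-style file with `IFS=': '` and `read -r var val _`, `continue`-ing past every field until the requested key matches, then echoing its value (here `AnonHugePages` -> `0`, stored as `anon=0`). A minimal self-contained sketch of that loop (the `[FILE]` argument and the sample-file demo are illustrative additions, not part of the original script):

```shell
#!/usr/bin/env bash
# get_meminfo KEY [FILE] -- echo KEY's value from a meminfo-style file
# (default /proc/meminfo); echo 0 when the key is absent.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    # IFS=': ' splits "MemTotal: 52291168 kB" into var=MemTotal,
    # val=52291168, with the "kB" unit falling into the throwaway _.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
        # non-matching keys are skipped, as in the trace's "continue"
    done < "$mem_f"
    echo 0
}

# Demo against a tiny sample so the sketch runs anywhere:
sample=$(mktemp)
printf '%s\n' 'MemTotal: 52291168 kB' 'AnonHugePages: 0 kB' > "$sample"
get_meminfo AnonHugePages "$sample"   # -> 0
get_meminfo MemTotal "$sample"        # -> 52291168
rm -f "$sample"
```

The real script additionally strips a `Node N ` prefix when reading a per-node `/sys/devices/system/node/nodeN/meminfo` file; that detail is omitted here.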
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:17.846 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 32425896 kB' 'MemAvailable: 36013836 kB' 'Buffers: 2704 kB' 'Cached: 15759440 kB' 'SwapCached: 0 kB' 'Active: 12749836 kB' 'Inactive: 3522068 kB' 'Active(anon): 12308292 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512908 kB' 'Mapped: 181672 kB' 'Shmem: 11798532 kB' 'KReclaimable: 186424 kB' 'Slab: 457620 kB' 'SReclaimable: 186424 kB' 'SUnreclaim: 271196 kB' 'KernelStack: 9904 kB' 'PageTables: 7152 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485612 kB' 'Committed_AS: 13251924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189708 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB' 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.111 02:17:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.111 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 
02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.112 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 32425896 kB' 'MemAvailable: 36013836 kB' 'Buffers: 2704 
kB' 'Cached: 15759460 kB' 'SwapCached: 0 kB' 'Active: 12749884 kB' 'Inactive: 3522068 kB' 'Active(anon): 12308340 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512924 kB' 'Mapped: 181672 kB' 'Shmem: 11798552 kB' 'KReclaimable: 186424 kB' 'Slab: 457628 kB' 'SReclaimable: 186424 kB' 'SUnreclaim: 271204 kB' 'KernelStack: 9904 kB' 'PageTables: 7184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485612 kB' 'Committed_AS: 13251948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189708 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB' 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.113 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.114 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.115 02:17:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:11:18.115 nr_hugepages=1024 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:18.115 resv_hugepages=0 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:18.115 surplus_hugepages=0 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:18.115 anon_hugepages=0 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 32425896 kB' 'MemAvailable: 36013836 kB' 'Buffers: 2704 kB' 'Cached: 15759500 kB' 'SwapCached: 0 kB' 'Active: 12749524 kB' 'Inactive: 3522068 kB' 'Active(anon): 12307980 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512532 kB' 'Mapped: 181672 kB' 'Shmem: 11798592 kB' 'KReclaimable: 186424 kB' 'Slab: 457628 kB' 'SReclaimable: 186424 kB' 'SUnreclaim: 271204 kB' 'KernelStack: 9888 kB' 'PageTables: 7136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485612 kB' 'Committed_AS: 13251968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189708 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB' 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.115 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.115 
[log elided: identical "IFS=': ' / read -r var val _ / continue" iterations for every meminfo key from SwapCached through CmaFree, none matching HugePages_Total]
00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in 
"${!nodes_test[@]}" 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 22649112 kB' 'MemUsed: 10185580 kB' 'SwapCached: 0 kB' 'Active: 7282040 kB' 'Inactive: 128392 kB' 'Active(anon): 7064672 kB' 'Inactive(anon): 0 kB' 'Active(file): 217368 kB' 'Inactive(file): 128392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7137064 kB' 'Mapped: 85076 kB' 'AnonPages: 276472 kB' 'Shmem: 6791304 kB' 'KernelStack: 5896 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95032 kB' 'Slab: 225584 kB' 'SReclaimable: 95032 kB' 'SUnreclaim: 130552 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.117 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.118 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.118 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.118 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.118 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.118 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.118 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
[log elided: identical "IFS=': ' / read -r var val _ / continue" iterations for every node0 meminfo key from Active through HugePages_Free, none matching HugePages_Surp] 00:11:18.119 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.119 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:18.119 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:18.119 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 --
# (( nodes_test[node] += 0 )) 00:11:18.119 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:18.119 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:18.119 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:18.119 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:11:18.119 node0=1024 expecting 1024 00:11:18.119 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:11:18.119 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:11:18.119 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:11:18.119 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:11:18.119 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:18.119 02:17:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:11:19.063 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:11:19.063 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:11:19.063 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:11:19.063 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:11:19.063 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:11:19.063 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:11:19.063 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:11:19.063 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:11:19.063 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:11:19.063 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:11:19.063 0000:80:04.6 (8086 3c26): Already using the 
vfio-pci driver 00:11:19.063 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:11:19.063 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:11:19.063 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:11:19.063 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:11:19.063 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:11:19.063 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:11:19.063 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 32411808 kB' 'MemAvailable: 35999748 kB' 'Buffers: 2704 kB' 'Cached: 15759544 kB' 'SwapCached: 0 kB' 'Active: 12749996 kB' 'Inactive: 3522068 kB' 'Active(anon): 12308452 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513000 kB' 'Mapped: 181792 kB' 'Shmem: 11798636 kB' 'KReclaimable: 186424 kB' 'Slab: 457792 kB' 'SReclaimable: 186424 kB' 'SUnreclaim: 271368 kB' 'KernelStack: 9920 kB' 'PageTables: 7272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485612 kB' 'Committed_AS: 13252172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189692 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 
33554432 kB' 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.063 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 
02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.064 02:17:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.064 02:17:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.064 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.065 
02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.065 02:17:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:19.065 
02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 32411808 kB' 'MemAvailable: 35999748 kB' 'Buffers: 2704 kB' 'Cached: 15759548 kB' 'SwapCached: 0 kB' 'Active: 12750336 kB' 'Inactive: 3522068 kB' 'Active(anon): 12308792 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513376 kB' 'Mapped: 181792 kB' 'Shmem: 11798640 kB' 'KReclaimable: 186424 kB' 'Slab: 457792 kB' 'SReclaimable: 186424 kB' 'SUnreclaim: 271368 kB' 'KernelStack: 9920 kB' 'PageTables: 7264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485612 kB' 'Committed_AS: 13252192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189676 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB' 00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:11:19.065 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- [repetitive trace condensed: each remaining /proc/meminfo field (MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted) is compared against HugePages_Surp and skipped via "IFS=': '; read -r var val _; continue"] 00:11:19.067 02:17:09
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:19.067 
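The trace above shows the pattern `setup/common.sh` uses to extract a single value from `/proc/meminfo`: split each line on `': '`, compare the field name against the requested key, and echo the value on a match (here `HugePages_Surp`, which came back `0`, so `surp=0`). A minimal standalone sketch of that technique — not the SPDK source itself; the sample file and its values are illustrative, taken from the meminfo dump printed in this log — could look like:

```shell
#!/usr/bin/env bash
# Hypothetical re-creation of the get_meminfo pattern seen in the trace:
# walk a meminfo-format file line by line, split on ': ', and print the
# value of the requested field.
get_meminfo() {
    local get=$1 var val _
    local mem_f=${2:-/proc/meminfo}   # second arg lets a sample file stand in
    while IFS=': ' read -r var val _; do
        # The trace's "[[ Field == \H\u\g\e... ]]" check, generalized.
        if [[ $var == "$get" ]]; then
            echo "$val"               # the 'kB' unit, if present, lands in _
            return 0
        fi
    done < "$mem_f"
    return 1                          # field not found
}

# Sample input mirroring fields from the meminfo dump in this log
cat > /tmp/meminfo.sample <<'EOF'
MemTotal: 52291168 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Rsvd: 0
HugePages_Surp: 0
EOF

get_meminfo HugePages_Surp /tmp/meminfo.sample   # prints 0
```

Splitting with `IFS=': '` treats both the colon and the space as field separators, so `var` gets the key, `val` the number, and the unit suffix is discarded into `_`, matching what the traced `read -r var val _` does.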
02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 32412272 kB' 'MemAvailable: 36000212 kB' 'Buffers: 2704 kB' 'Cached: 15759548 kB' 'SwapCached: 0 kB' 'Active: 12749716 kB' 'Inactive: 3522068 kB' 'Active(anon): 12308172 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512692 kB' 'Mapped: 181680 kB' 'Shmem: 11798640 kB' 'KReclaimable: 186424 kB' 'Slab: 457776 kB' 'SReclaimable: 186424 kB' 'SUnreclaim: 271352 kB' 'KernelStack: 9904 kB' 'PageTables: 7192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485612 kB' 'Committed_AS: 13252212 
kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189676 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB' 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.067 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.067 
02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- [repetitive trace condensed: the fields Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed are each compared against HugePages_Rsvd and skipped via "IFS=': '; read -r var val _; continue"]
00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.332 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@100 -- # resv=0 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:11:19.333 nr_hugepages=1024 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:19.333 resv_hugepages=0 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:19.333 surplus_hugepages=0 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:19.333 anon_hugepages=0 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291168 kB' 'MemFree: 32412272 kB' 'MemAvailable: 36000212 kB' 'Buffers: 2704 kB' 'Cached: 15759588 kB' 'SwapCached: 0 kB' 'Active: 12750028 kB' 'Inactive: 3522068 kB' 'Active(anon): 12308484 kB' 'Inactive(anon): 0 kB' 'Active(file): 441544 kB' 'Inactive(file): 3522068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512980 kB' 'Mapped: 181680 kB' 'Shmem: 11798680 kB' 'KReclaimable: 186424 kB' 'Slab: 457768 kB' 'SReclaimable: 186424 kB' 'SUnreclaim: 271344 kB' 'KernelStack: 9904 kB' 'PageTables: 7192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485612 kB' 'Committed_AS: 13252236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189676 kB' 'VmallocChunk: 0 kB' 'Percpu: 21376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2021668 kB' 'DirectMap2M: 25163776 kB' 'DirectMap1G: 33554432 kB' 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.333 02:17:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.333 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.334 02:17:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.334 02:17:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.334 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.335 02:17:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:19.335 
02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 22636452 kB' 'MemUsed: 10198240 kB' 'SwapCached: 0 kB' 'Active: 7281948 kB' 'Inactive: 128392 kB' 'Active(anon): 7064580 kB' 'Inactive(anon): 0 kB' 'Active(file): 217368 kB' 'Inactive(file): 128392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7137068 kB' 'Mapped: 85084 kB' 'AnonPages: 276352 kB' 'Shmem: 6791308 kB' 'KernelStack: 5880 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95032 kB' 'Slab: 225736 kB' 'SReclaimable: 95032 kB' 'SUnreclaim: 130704 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.335 
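The `\H\u\g\e\P\a\g\e\s\_\S\u\r\p` strings above are not in the script source; they are how bash `set -x` renders a quoted (literal) right-hand side of a `[[ == ]]` comparison. A minimal reproduction (exact xtrace rendering may vary across bash versions):

```shell
# Capture the xtrace of a literal [[ == ]] comparison; bash escapes the
# quoted pattern character by character, as seen throughout this log.
var=MemTotal
trace=$( { set -x; [[ $var == "HugePages_Surp" ]] || true; set +x; } 2>&1 )
printf '%s\n' "$trace"
```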
02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.335 
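Earlier in this trace (`hugepages.sh@30`), the per-node counts are indexed as `nodes_sys[${node##*node}]=...`; the `##*node` expansion strips everything through the last `node` in the sysfs path, leaving just the numeric index:

```shell
# Longest-prefix removal: "##*node" deletes up to and including the last
# occurrence of "node", so only the trailing index remains.
node=/sys/devices/system/node/node1
idx=${node##*node}
echo "$idx"   # prints 1
```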
02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.335 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:11:19.336 node0=1024 expecting 1024 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:11:19.336 00:11:19.336 real 0m2.317s 
00:11:19.336 user 0m1.059s 00:11:19.336 sys 0m1.318s 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:19.336 02:17:09 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:19.336 ************************************ 00:11:19.336 END TEST no_shrink_alloc 00:11:19.336 ************************************ 00:11:19.336 02:17:09 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:11:19.336 02:17:09 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:11:19.336 02:17:09 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:11:19.336 02:17:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:11:19.336 02:17:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:19.336 02:17:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:19.336 02:17:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:19.336 02:17:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:19.337 02:17:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:11:19.337 02:17:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:19.337 02:17:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:19.337 02:17:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:19.337 02:17:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:19.337 02:17:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:11:19.337 02:17:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:11:19.337 00:11:19.337 real 0m9.689s 00:11:19.337 user 0m3.982s 00:11:19.337 sys 
0m5.095s 00:11:19.337 02:17:09 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:19.337 02:17:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:19.337 ************************************ 00:11:19.337 END TEST hugepages 00:11:19.337 ************************************ 00:11:19.337 02:17:09 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:11:19.337 02:17:09 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:11:19.337 02:17:09 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:19.337 02:17:09 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:19.337 02:17:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:19.337 ************************************ 00:11:19.337 START TEST driver 00:11:19.337 ************************************ 00:11:19.337 02:17:09 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:11:19.337 * Looking for test storage... 
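The `clear_hp` calls traced above write `0` into every per-node `nr_hugepages` file. A sketch of the same loop against a scratch directory, so it can run without root or real hugepage sysfs entries (`clear_hp_demo` is an illustrative name, not the script's):

```shell
# Reset every hugepage count found under a (here: fake) sysfs-like tree.
clear_hp_demo() {
    local root=$1 hp
    for hp in "$root"/node*/hugepages/hugepages-*/nr_hugepages; do
        [[ -e $hp ]] && echo 0 > "$hp"
    done
    return 0
}

tmp=$(mktemp -d)
mkdir -p "$tmp/node0/hugepages/hugepages-2048kB"
echo 1024 > "$tmp/node0/hugepages/hugepages-2048kB/nr_hugepages"
clear_hp_demo "$tmp"
val=$(<"$tmp/node0/hugepages/hugepages-2048kB/nr_hugepages")
echo "$val"   # prints 0
rm -rf "$tmp"
```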
00:11:19.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:11:19.337 02:17:09 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:11:19.337 02:17:09 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:19.337 02:17:09 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:11:21.875 02:17:11 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:11:21.875 02:17:11 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:21.875 02:17:11 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:21.875 02:17:11 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:11:21.875 ************************************ 00:11:21.875 START TEST guess_driver 00:11:21.875 ************************************ 00:11:21.875 02:17:11 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:11:21.875 02:17:11 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:11:21.875 02:17:11 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:11:21.875 02:17:11 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:11:21.875 02:17:11 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:11:21.875 02:17:11 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:11:21.875 02:17:11 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:11:21.875 02:17:11 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:11:21.875 02:17:11 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:11:21.875 02:17:11 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:11:21.875 02:17:11 setup.sh.driver.guess_driver -- setup/driver.sh@29 
-- # (( 102 > 0 )) 00:11:21.875 02:17:11 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:11:21.875 02:17:11 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:11:21.875 02:17:11 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:11:21.875 02:17:11 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:11:21.875 02:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:11:21.875 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:11:21.875 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:11:21.875 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:11:21.875 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:11:21.875 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:11:21.875 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:11:21.875 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:11:21.875 02:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:11:21.875 02:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:11:21.875 02:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:11:21.875 02:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:11:21.875 02:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:11:21.875 Looking for driver=vfio-pci 00:11:21.875 02:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:21.875 02:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup 
output config 00:11:21.875 02:17:12 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:11:21.875 02:17:12 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:11:22.816 02:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:22.816 02:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:22.816 02:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:22.816 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:22.816 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:22.816 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:22.816 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:22.816 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:22.816 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:22.816 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:22.816 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:22.816 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:22.816 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:22.816 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:22.816 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:22.816 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:22.816 02:17:13 setup.sh.driver.guess_driver -- 
setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:22.816 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:22.816 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:22.816 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:22.816 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:22.816 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:22.817 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:22.817 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:22.817 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:22.817 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:22.817 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:22.817 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:22.817 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:22.817 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:22.817 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:22.817 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:22.817 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:22.817 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:22.817 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:22.817 02:17:13 setup.sh.driver.guess_driver -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:22.817 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:22.817 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:22.817 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:22.817 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:22.817 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:22.817 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:22.817 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:22.817 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:22.817 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:22.817 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:22.817 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:22.817 02:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:23.753 02:17:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:23.753 02:17:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:23.753 02:17:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:23.753 02:17:14 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:11:23.753 02:17:14 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:11:23.753 02:17:14 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:23.753 02:17:14 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:11:26.293 00:11:26.293 real 0m4.393s 00:11:26.293 user 0m0.979s 00:11:26.293 sys 0m1.666s 00:11:26.293 02:17:16 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:26.293 02:17:16 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:11:26.293 ************************************ 00:11:26.293 END TEST guess_driver 00:11:26.293 ************************************ 00:11:26.293 02:17:16 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:11:26.293 00:11:26.293 real 0m6.777s 00:11:26.293 user 0m1.503s 00:11:26.293 sys 0m2.627s 00:11:26.293 02:17:16 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:26.293 02:17:16 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:11:26.293 ************************************ 00:11:26.293 END TEST driver 00:11:26.293 ************************************ 00:11:26.293 02:17:16 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:11:26.293 02:17:16 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:11:26.293 02:17:16 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:26.293 02:17:16 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:26.293 02:17:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:26.293 ************************************ 00:11:26.293 START TEST devices 00:11:26.293 ************************************ 00:11:26.293 02:17:16 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:11:26.293 * Looking for test storage... 
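The `guess_driver` trace above settles on `vfio-pci` after two checks: IOMMU groups exist (`(( 102 > 0 ))`) and `modprobe --show-depends vfio_pci` resolves to real kernel objects (`== *\.\k\o*`). A host-independent sketch of that final string test, fed sample `modprobe` output (`resolves_to_ko` is an illustrative helper, not from driver.sh):

```shell
# True when modprobe's dependency listing names at least one .ko module,
# i.e. the driver is actually loadable on this kernel.
resolves_to_ko() {
    [[ $1 == *.ko* ]]
}

sample='insmod /lib/modules/6.7.0/kernel/drivers/vfio/pci/vfio-pci.ko.xz'
if resolves_to_ko "$sample"; then
    echo vfio-pci   # prints vfio-pci
fi
```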
00:11:26.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:11:26.293 02:17:16 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:11:26.293 02:17:16 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:11:26.293 02:17:16 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:26.293 02:17:16 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:11:27.677 02:17:17 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:11:27.677 02:17:17 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:11:27.677 02:17:17 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:11:27.677 02:17:17 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:11:27.677 02:17:17 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:27.677 02:17:17 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:11:27.677 02:17:17 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:11:27.677 02:17:17 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:27.677 02:17:17 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:27.677 02:17:17 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:11:27.677 02:17:17 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:11:27.677 02:17:17 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:11:27.677 02:17:17 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:11:27.677 02:17:17 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:11:27.677 02:17:17 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:11:27.677 02:17:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
00:11:27.677 02:17:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:11:27.677 02:17:17 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:84:00.0 00:11:27.677 02:17:17 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\4\:\0\0\.\0* ]] 00:11:27.677 02:17:17 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:11:27.677 02:17:17 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:11:27.677 02:17:17 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:11:27.677 No valid GPT data, bailing 00:11:27.677 02:17:17 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:27.677 02:17:17 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:11:27.677 02:17:17 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:11:27.677 02:17:17 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:11:27.677 02:17:17 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:27.677 02:17:17 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:27.677 02:17:17 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:11:27.677 02:17:17 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:11:27.677 02:17:17 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:11:27.677 02:17:17 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:84:00.0 00:11:27.677 02:17:17 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:11:27.677 02:17:17 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:11:27.677 02:17:17 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:11:27.677 02:17:17 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:27.677 02:17:17 setup.sh.devices -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:11:27.677 02:17:17 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:11:27.677 ************************************ 00:11:27.677 START TEST nvme_mount 00:11:27.677 ************************************ 00:11:27.677 02:17:17 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:11:27.677 02:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:11:27.677 02:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:11:27.677 02:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:27.677 02:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:11:27.677 02:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:11:27.677 02:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:11:27.677 02:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:11:27.677 02:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:11:27.677 02:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:11:27.677 02:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:11:27.677 02:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:11:27.677 02:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:11:27.677 02:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:27.677 02:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:11:27.677 02:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:11:27.677 02:17:17 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:27.677 02:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:11:27.677 02:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:11:27.677 02:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:11:28.617 Creating new GPT entries in memory. 00:11:28.618 GPT data structures destroyed! You may now partition the disk using fdisk or 00:11:28.618 other utilities. 00:11:28.618 02:17:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:11:28.618 02:17:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:28.618 02:17:18 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:11:28.618 02:17:18 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:28.618 02:17:18 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:11:29.556 Creating new GPT entries in memory. 00:11:29.556 The operation has completed successfully. 
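The `sgdisk --new=1:2048:2099199` call above carves a 1 GiB partition, and the LBA range comes from the sector arithmetic traced in `setup/common.sh` (`size /= 512`, `part_start`, `part_end`). A minimal sketch of that arithmetic, reusing the variable names from the trace:

```shell
# Mirrors the arithmetic traced from setup/common.sh above: a 1 GiB
# partition expressed in 512-byte sectors, starting at LBA 2048.
size=1073741824            # bytes (common.sh@41)
(( size /= 512 ))          # common.sh@51: bytes -> 512-byte sectors
part_start=2048            # common.sh@58: first partition starts at LBA 2048
(( part_end = part_start + size - 1 ))
echo "sgdisk /dev/nvme0n1 --new=1:${part_start}:${part_end}"
# → sgdisk /dev/nvme0n1 --new=1:2048:2099199
```

A second partition of the same size would start at `part_end + 1`, giving the range `2099200:4196351` that the dm_mount test uses later in this log.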
00:11:29.556 02:17:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:11:29.556 02:17:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:29.556 02:17:19 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1736396 00:11:29.556 02:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:29.556 02:17:19 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:11:29.556 02:17:19 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:29.556 02:17:19 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:11:29.556 02:17:19 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:11:29.556 02:17:19 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:29.556 02:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:84:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:11:29.556 02:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:11:29.556 02:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:11:29.556 02:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:29.556 02:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
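The `mkfs` helper traced above reduces to four commands: create the mount point, check the device node exists, force-format it, and mount it. An echo-only sketch of that sequence — `run()` prints each command instead of executing it, so this is safe anywhere; the short mount point is an assumption for illustration, not the harness's real path:

```shell
# Echo-only sketch of the mkfs helper traced from setup/common.sh above.
# run() prints the command rather than executing it; drop the echo to
# perform the real (destructive) steps on an actual test disk.
run() { echo "+ $*"; }

dev=/dev/nvme0n1p1        # partition, as in the log
mnt=/tmp/nvme_mount       # assumed short mount point for illustration

run mkdir -p "$mnt"       # common.sh@68: create the mount point
run test -e "$dev"        # common.sh@70: bail out unless the node exists
run mkfs.ext4 -qF "$dev"  # common.sh@71: force-format, quietly
run mount "$dev" "$mnt"   # common.sh@72
```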
00:11:29.556 02:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:11:29.556 02:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:29.556 02:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:11:29.556 02:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:11:29.556 02:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:29.556 02:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:11:29.556 02:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:11:29.556 02:17:19 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:29.556 02:17:19 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:11:30.495 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:30.495 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:11:30.495 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:11:30.495 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:30.495 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:30.495 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:30.495 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:30.495 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:30.495 02:17:20 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:30.495 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:30.495 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:30.495 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:30.495 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:30.495 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:30.495 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:30.495 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:30.495 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:30.495 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:30.495 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:30.495 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:30.495 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:30.495 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:30.495 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:30.495 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:30.496 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:30.496 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:11:30.496 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:30.496 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:30.496 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:30.496 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:30.496 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:30.496 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:30.496 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:30.496 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:30.496 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:30.496 02:17:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:30.769 02:17:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:30.769 02:17:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:11:30.769 02:17:21 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:30.769 02:17:21 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:30.769 02:17:21 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:11:30.769 02:17:21 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:11:30.769 
02:17:21 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:30.769 02:17:21 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:30.769 02:17:21 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:30.769 02:17:21 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:11:30.769 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:11:30.769 02:17:21 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:11:30.769 02:17:21 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:11:31.075 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:11:31.075 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:11:31.075 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:31.075 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:31.075 02:17:21 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:11:31.075 02:17:21 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:11:31.075 02:17:21 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:31.075 02:17:21 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:11:31.075 02:17:21 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:11:31.075 02:17:21 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:31.075 02:17:21 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:84:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:11:31.075 02:17:21 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:11:31.075 02:17:21 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:11:31.075 02:17:21 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:31.075 02:17:21 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:11:31.075 02:17:21 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:11:31.075 02:17:21 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:31.075 02:17:21 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:11:31.076 02:17:21 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:11:31.076 02:17:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:31.076 02:17:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:11:31.076 02:17:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:11:31.076 02:17:21 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:31.076 02:17:21 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == 
\0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.015 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.016 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.016 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.016 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.016 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.016 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.016 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.016 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.016 02:17:22 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.016 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:32.016 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:11:32.016 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:32.016 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:32.016 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:11:32.016 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:32.016 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:84:00.0 data@nvme0n1 '' '' 00:11:32.016 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:11:32.016 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:11:32.016 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:11:32.016 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:11:32.016 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:11:32.016 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:11:32.016 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:11:32.016 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.016 02:17:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:11:32.016 02:17:22 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:11:32.016 02:17:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:32.016 02:17:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.953 02:17:23 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:32.953 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:33.214 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:33.214 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:11:33.214 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:11:33.214 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:11:33.214 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:33.214 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:33.214 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:11:33.214 02:17:23 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:11:33.214 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:11:33.214 00:11:33.214 real 0m5.593s 00:11:33.214 user 0m1.309s 00:11:33.214 sys 0m1.992s 00:11:33.214 02:17:23 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:33.214 02:17:23 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:11:33.214 ************************************ 00:11:33.214 END TEST nvme_mount 00:11:33.214 ************************************ 00:11:33.214 02:17:23 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:11:33.214 02:17:23 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 
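The `wipefs` output above is the tail of `cleanup_nvme`, and the teardown order matters: unmount first, then erase the partition's filesystem signature, then the whole disk's GPT headers and protective MBR. An echo-only sketch of that order — `run()` prints rather than executes, and the mount point is an assumption for illustration:

```shell
# Echo-only sketch of cleanup_nvme traced from setup/devices.sh above:
# unmount, then wipe signatures, partition before whole disk.
run() { echo "+ $*"; }

mnt=/tmp/nvme_mount          # assumed mount point for illustration
disk=/dev/nvme0n1            # test disk, as in the log

run umount "$mnt"            # devices.sh@21: only if still a mountpoint
run wipefs --all "${disk}p1" # devices.sh@25: erase the ext4 signature
run wipefs --all "$disk"     # devices.sh@28: erase GPT headers and PMBR
```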
00:11:33.214 02:17:23 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:33.214 02:17:23 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.214 02:17:23 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:11:33.214 ************************************ 00:11:33.214 START TEST dm_mount 00:11:33.214 ************************************ 00:11:33.214 02:17:23 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:11:33.214 02:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:11:33.214 02:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:11:33.214 02:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:11:33.214 02:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:11:33.214 02:17:23 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:11:33.214 02:17:23 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:11:33.214 02:17:23 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:11:33.214 02:17:23 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:11:33.214 02:17:23 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:11:33.214 02:17:23 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:11:33.214 02:17:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:11:33.214 02:17:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:33.214 02:17:23 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:11:33.214 02:17:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:11:33.214 02:17:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:33.214 02:17:23 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # 
parts+=("${disk}p$part") 00:11:33.214 02:17:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:11:33.214 02:17:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:33.214 02:17:23 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:11:33.214 02:17:23 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:11:33.214 02:17:23 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:11:34.151 Creating new GPT entries in memory. 00:11:34.151 GPT data structures destroyed! You may now partition the disk using fdisk or 00:11:34.151 other utilities. 00:11:34.151 02:17:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:11:34.151 02:17:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:34.151 02:17:24 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:11:34.151 02:17:24 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:34.151 02:17:24 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:11:35.532 Creating new GPT entries in memory. 00:11:35.532 The operation has completed successfully. 00:11:35.532 02:17:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:11:35.532 02:17:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:35.532 02:17:25 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:11:35.532 02:17:25 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:35.532 02:17:25 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:11:36.473 The operation has completed successfully. 00:11:36.473 02:17:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:11:36.473 02:17:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:36.473 02:17:26 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1738172 00:11:36.473 02:17:26 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:11:36.473 02:17:26 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:11:36.473 02:17:26 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:11:36.473 02:17:26 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:11:36.473 02:17:26 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:11:36.473 02:17:26 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:36.473 02:17:26 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:11:36.473 02:17:26 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:36.473 02:17:26 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:11:36.473 02:17:26 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:11:36.473 02:17:26 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:11:36.473 02:17:26 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:11:36.473 02:17:26 setup.sh.devices.dm_mount -- 
setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:11:36.473 02:17:26 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:11:36.473 02:17:26 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:11:36.473 02:17:26 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:11:36.473 02:17:26 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:36.473 02:17:26 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:11:36.473 02:17:26 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:11:36.473 02:17:26 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:84:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:11:36.473 02:17:26 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:11:36.473 02:17:26 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:11:36.474 02:17:26 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:11:36.474 02:17:26 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:11:36.474 02:17:26 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:11:36.474 02:17:26 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:11:36.474 02:17:26 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:11:36.474 02:17:26 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:11:36.474 02:17:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:36.474 02:17:26 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:11:36.474 02:17:26 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:11:36.474 02:17:26 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:36.474 02:17:26 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:11:37.041 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:37.041 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:11:37.041 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:11:37.041 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:37.041 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:37.041 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:37.041 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:37.041 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:37.041 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:37.041 02:17:27 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:11:37.041 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:37.041 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:37.041 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:37.041 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:37.041 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:37.041 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:84:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:11:37.301 02:17:27 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:11:37.301 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:11:37.302 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:11:37.302 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:11:37.302 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:11:37.302 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:37.302 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:11:37.302 02:17:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:11:37.302 02:17:27 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:37.302 02:17:27 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:38.241 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:38.242 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:38.242 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:11:38.242 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:38.242 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:38.242 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:11:38.242 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:11:38.242 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:11:38.242 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:11:38.242 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]] 00:11:38.242 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:11:38.242 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:38.242 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:11:38.242 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:11:38.242 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:11:38.242 02:17:28 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:11:38.242 00:11:38.242 real 0m5.160s 00:11:38.242 user 0m0.797s 00:11:38.242 sys 0m1.296s 00:11:38.242 02:17:28 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:38.242 02:17:28 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:11:38.242 ************************************ 00:11:38.242 END TEST dm_mount 00:11:38.242 ************************************ 00:11:38.502 02:17:28 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:11:38.502 02:17:28 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:11:38.502 02:17:28 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:11:38.502 02:17:28 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:38.502 02:17:28 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:38.502 02:17:28 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:11:38.502 02:17:28 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:11:38.502 02:17:28 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:11:38.762 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:11:38.762 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 
50 41 52 54 00:11:38.762 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:38.762 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:38.762 02:17:28 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:11:38.762 02:17:28 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:11:38.762 02:17:28 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:11:38.762 02:17:28 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:38.762 02:17:28 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:11:38.762 02:17:28 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:11:38.762 02:17:28 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:11:38.762 00:11:38.762 real 0m12.490s 00:11:38.762 user 0m2.718s 00:11:38.762 sys 0m4.223s 00:11:38.762 02:17:28 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:38.762 02:17:28 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:11:38.762 ************************************ 00:11:38.762 END TEST devices 00:11:38.762 ************************************ 00:11:38.762 02:17:28 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:11:38.762 00:11:38.762 real 0m38.486s 00:11:38.762 user 0m11.299s 00:11:38.762 sys 0m16.739s 00:11:38.762 02:17:28 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:38.762 02:17:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:38.762 ************************************ 00:11:38.762 END TEST setup.sh 00:11:38.762 ************************************ 00:11:38.762 02:17:29 -- common/autotest_common.sh@1142 -- # return 0 00:11:38.762 02:17:29 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:11:39.701 Hugepages 00:11:39.701 node hugesize free / total 
00:11:39.701 node0 1048576kB 0 / 0 00:11:39.701 node0 2048kB 2048 / 2048 00:11:39.701 node1 1048576kB 0 / 0 00:11:39.701 node1 2048kB 0 / 0 00:11:39.701 00:11:39.701 Type BDF Vendor Device NUMA Driver Device Block devices 00:11:39.701 I/OAT 0000:00:04.0 8086 3c20 0 ioatdma - - 00:11:39.701 I/OAT 0000:00:04.1 8086 3c21 0 ioatdma - - 00:11:39.701 I/OAT 0000:00:04.2 8086 3c22 0 ioatdma - - 00:11:39.701 I/OAT 0000:00:04.3 8086 3c23 0 ioatdma - - 00:11:39.701 I/OAT 0000:00:04.4 8086 3c24 0 ioatdma - - 00:11:39.701 I/OAT 0000:00:04.5 8086 3c25 0 ioatdma - - 00:11:39.701 I/OAT 0000:00:04.6 8086 3c26 0 ioatdma - - 00:11:39.701 I/OAT 0000:00:04.7 8086 3c27 0 ioatdma - - 00:11:39.701 I/OAT 0000:80:04.0 8086 3c20 1 ioatdma - - 00:11:39.701 I/OAT 0000:80:04.1 8086 3c21 1 ioatdma - - 00:11:39.701 I/OAT 0000:80:04.2 8086 3c22 1 ioatdma - - 00:11:39.701 I/OAT 0000:80:04.3 8086 3c23 1 ioatdma - - 00:11:39.701 I/OAT 0000:80:04.4 8086 3c24 1 ioatdma - - 00:11:39.701 I/OAT 0000:80:04.5 8086 3c25 1 ioatdma - - 00:11:39.701 I/OAT 0000:80:04.6 8086 3c26 1 ioatdma - - 00:11:39.701 I/OAT 0000:80:04.7 8086 3c27 1 ioatdma - - 00:11:39.701 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:11:39.701 02:17:30 -- spdk/autotest.sh@130 -- # uname -s 00:11:39.701 02:17:30 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:11:39.701 02:17:30 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:11:39.701 02:17:30 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:11:40.640 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:11:40.899 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:11:40.899 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:11:40.899 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:11:40.899 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:11:40.899 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:11:40.899 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:11:40.899 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:11:40.899 
0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:11:40.899 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:11:40.899 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:11:40.899 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:11:40.899 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:11:40.899 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:11:40.899 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:11:40.899 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:11:41.839 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:11:41.839 02:17:32 -- common/autotest_common.sh@1532 -- # sleep 1 00:11:42.776 02:17:33 -- common/autotest_common.sh@1533 -- # bdfs=() 00:11:42.776 02:17:33 -- common/autotest_common.sh@1533 -- # local bdfs 00:11:42.776 02:17:33 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:11:42.776 02:17:33 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:11:42.776 02:17:33 -- common/autotest_common.sh@1513 -- # bdfs=() 00:11:42.776 02:17:33 -- common/autotest_common.sh@1513 -- # local bdfs 00:11:42.776 02:17:33 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:42.776 02:17:33 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:11:42.776 02:17:33 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:11:43.035 02:17:33 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:11:43.035 02:17:33 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:84:00.0 00:11:43.035 02:17:33 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:11:43.986 Waiting for block devices as requested 00:11:43.986 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:11:43.986 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:11:43.986 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:11:43.986 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:11:44.245 0000:00:04.4 (8086 
3c24): vfio-pci -> ioatdma 00:11:44.245 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:11:44.245 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:11:44.245 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:11:44.504 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:11:44.505 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:11:44.505 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:11:44.763 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:11:44.763 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:11:44.763 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:11:44.763 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:11:45.022 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:11:45.022 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:11:45.022 02:17:35 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:11:45.022 02:17:35 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:84:00.0 00:11:45.022 02:17:35 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:11:45.022 02:17:35 -- common/autotest_common.sh@1502 -- # grep 0000:84:00.0/nvme/nvme 00:11:45.022 02:17:35 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 00:11:45.022 02:17:35 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 ]] 00:11:45.022 02:17:35 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 00:11:45.022 02:17:35 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:11:45.022 02:17:35 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:11:45.022 02:17:35 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:11:45.022 02:17:35 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:11:45.022 02:17:35 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:11:45.022 02:17:35 -- common/autotest_common.sh@1545 -- # grep oacs 00:11:45.022 02:17:35 -- 
common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:11:45.022 02:17:35 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:11:45.022 02:17:35 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:11:45.022 02:17:35 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:11:45.022 02:17:35 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:11:45.022 02:17:35 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:11:45.022 02:17:35 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:11:45.022 02:17:35 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:11:45.022 02:17:35 -- common/autotest_common.sh@1557 -- # continue 00:11:45.022 02:17:35 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:11:45.022 02:17:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:45.022 02:17:35 -- common/autotest_common.sh@10 -- # set +x 00:11:45.022 02:17:35 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:11:45.022 02:17:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:45.022 02:17:35 -- common/autotest_common.sh@10 -- # set +x 00:11:45.022 02:17:35 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:11:46.401 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:11:46.401 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:11:46.401 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:11:46.401 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:11:46.401 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:11:46.401 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:11:46.401 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:11:46.401 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:11:46.401 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:11:46.401 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:11:46.401 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:11:46.401 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:11:46.401 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:11:46.401 0000:80:04.2 (8086 
3c22): ioatdma -> vfio-pci 00:11:46.401 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:11:46.401 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:11:46.969 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:11:47.228 02:17:37 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:11:47.228 02:17:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:47.228 02:17:37 -- common/autotest_common.sh@10 -- # set +x 00:11:47.228 02:17:37 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:11:47.228 02:17:37 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:11:47.228 02:17:37 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:11:47.228 02:17:37 -- common/autotest_common.sh@1577 -- # bdfs=() 00:11:47.228 02:17:37 -- common/autotest_common.sh@1577 -- # local bdfs 00:11:47.228 02:17:37 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:11:47.228 02:17:37 -- common/autotest_common.sh@1513 -- # bdfs=() 00:11:47.228 02:17:37 -- common/autotest_common.sh@1513 -- # local bdfs 00:11:47.228 02:17:37 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:47.228 02:17:37 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:11:47.228 02:17:37 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:11:47.228 02:17:37 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:11:47.228 02:17:37 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:84:00.0 00:11:47.228 02:17:37 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:11:47.228 02:17:37 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:84:00.0/device 00:11:47.228 02:17:37 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:11:47.228 02:17:37 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:11:47.228 02:17:37 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:11:47.228 02:17:37 -- 
common/autotest_common.sh@1586 -- # printf '%s\n' 0000:84:00.0 00:11:47.228 02:17:37 -- common/autotest_common.sh@1592 -- # [[ -z 0000:84:00.0 ]] 00:11:47.228 02:17:37 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1742175 00:11:47.228 02:17:37 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:11:47.228 02:17:37 -- common/autotest_common.sh@1598 -- # waitforlisten 1742175 00:11:47.228 02:17:37 -- common/autotest_common.sh@829 -- # '[' -z 1742175 ']' 00:11:47.228 02:17:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.228 02:17:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:47.228 02:17:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.228 02:17:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:47.228 02:17:37 -- common/autotest_common.sh@10 -- # set +x 00:11:47.228 [2024-07-11 02:17:37.590109] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:11:47.228 [2024-07-11 02:17:37.590220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1742175 ] 00:11:47.228 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.486 [2024-07-11 02:17:37.650250] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.486 [2024-07-11 02:17:37.741858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.745 02:17:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:47.745 02:17:37 -- common/autotest_common.sh@862 -- # return 0 00:11:47.745 02:17:37 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:11:47.745 02:17:37 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:11:47.745 02:17:37 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:84:00.0 00:11:51.032 nvme0n1 00:11:51.032 02:17:41 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:11:51.032 [2024-07-11 02:17:41.358359] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:11:51.032 [2024-07-11 02:17:41.358404] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:11:51.032 request: 00:11:51.032 { 00:11:51.032 "nvme_ctrlr_name": "nvme0", 00:11:51.032 "password": "test", 00:11:51.032 "method": "bdev_nvme_opal_revert", 00:11:51.032 "req_id": 1 00:11:51.032 } 00:11:51.032 Got JSON-RPC error response 00:11:51.032 response: 00:11:51.032 { 00:11:51.032 "code": -32603, 00:11:51.032 "message": "Internal error" 00:11:51.032 } 00:11:51.032 02:17:41 -- common/autotest_common.sh@1604 -- # true 00:11:51.032 02:17:41 -- common/autotest_common.sh@1605 -- # 
(( ++bdf_id )) 00:11:51.032 02:17:41 -- common/autotest_common.sh@1608 -- # killprocess 1742175 00:11:51.032 02:17:41 -- common/autotest_common.sh@948 -- # '[' -z 1742175 ']' 00:11:51.032 02:17:41 -- common/autotest_common.sh@952 -- # kill -0 1742175 00:11:51.032 02:17:41 -- common/autotest_common.sh@953 -- # uname 00:11:51.032 02:17:41 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:51.032 02:17:41 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1742175 00:11:51.032 02:17:41 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:51.032 02:17:41 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:51.032 02:17:41 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1742175' 00:11:51.032 killing process with pid 1742175 00:11:51.032 02:17:41 -- common/autotest_common.sh@967 -- # kill 1742175 00:11:51.032 02:17:41 -- common/autotest_common.sh@972 -- # wait 1742175 00:11:51.032 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 
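The `kill -0 1742175` step traced in `killprocess` above is a liveness probe, not a termination: signal 0 delivers nothing and only reports whether the pid exists and can be signaled. A minimal sketch with a throwaway background job:

```shell
# Sketch of the `kill -0` liveness probe used by killprocess in the log:
# signal 0 delivers no signal, it only checks the pid is alive and signalable.
sleep 5 &
pid=$!
kill -0 "$pid" 2>/dev/null && before=alive || before=missing
kill "$pid"                  # actual termination, as killprocess does next
wait "$pid" 2>/dev/null      # reap the job so the pid is gone
kill -0 "$pid" 2>/dev/null && after=alive || after=missing
echo "$before -> $after"
```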
00:11:52.665 02:17:43 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:11:52.665 02:17:43 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:11:52.665 02:17:43 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:11:52.665 02:17:43 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:11:52.665 02:17:43 -- spdk/autotest.sh@162 -- # timing_enter lib 00:11:52.665 02:17:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:52.665 02:17:43 -- common/autotest_common.sh@10 -- # set +x 00:11:52.665 02:17:43 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:11:52.665 02:17:43 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:11:52.665 02:17:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:52.665 02:17:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:52.665 02:17:43 -- common/autotest_common.sh@10 -- # set +x 00:11:52.924 ************************************ 00:11:52.924 START TEST env 00:11:52.924 ************************************ 00:11:52.924 02:17:43 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:11:52.924 * Looking for test storage... 
00:11:52.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:11:52.924 02:17:43 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:11:52.924 02:17:43 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:52.924 02:17:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:52.924 02:17:43 env -- common/autotest_common.sh@10 -- # set +x 00:11:52.924 ************************************ 00:11:52.924 START TEST env_memory 00:11:52.924 ************************************ 00:11:52.924 02:17:43 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:11:52.924 00:11:52.924 00:11:52.924 CUnit - A unit testing framework for C - Version 2.1-3 00:11:52.924 http://cunit.sourceforge.net/ 00:11:52.924 00:11:52.924 00:11:52.924 Suite: memory 00:11:52.924 Test: alloc and free memory map ...[2024-07-11 02:17:43.227161] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:11:52.924 passed 00:11:52.924 Test: mem map translation ...[2024-07-11 02:17:43.258222] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:11:52.924 [2024-07-11 02:17:43.258252] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:11:52.924 [2024-07-11 02:17:43.258307] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:11:52.924 [2024-07-11 02:17:43.258322] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 
600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:11:52.924 passed 00:11:52.924 Test: mem map registration ...[2024-07-11 02:17:43.322215] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:11:52.924 [2024-07-11 02:17:43.322238] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:11:52.924 passed 00:11:53.183 Test: mem map adjacent registrations ...passed 00:11:53.183 00:11:53.183 Run Summary: Type Total Ran Passed Failed Inactive 00:11:53.183 suites 1 1 n/a 0 0 00:11:53.183 tests 4 4 4 0 0 00:11:53.183 asserts 152 152 152 0 n/a 00:11:53.183 00:11:53.183 Elapsed time = 0.217 seconds 00:11:53.183 00:11:53.183 real 0m0.226s 00:11:53.183 user 0m0.216s 00:11:53.183 sys 0m0.009s 00:11:53.183 02:17:43 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:53.184 02:17:43 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:11:53.184 ************************************ 00:11:53.184 END TEST env_memory 00:11:53.184 ************************************ 00:11:53.184 02:17:43 env -- common/autotest_common.sh@1142 -- # return 0 00:11:53.184 02:17:43 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:11:53.184 02:17:43 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:53.184 02:17:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:53.184 02:17:43 env -- common/autotest_common.sh@10 -- # set +x 00:11:53.184 ************************************ 00:11:53.184 START TEST env_vtophys 00:11:53.184 ************************************ 00:11:53.184 02:17:43 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 
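The invalid-parameter errors in the env_memory output above (`vaddr=2097152 len=1234` and the swapped pair) come from values that are not multiples of the 2 MiB translation granularity. A sketch of that alignment check, with the mask inferred from the 2048 kB hugepage size in the log rather than taken from memory.c:

```shell
# Alignment check sketched from the errors above: 2097152 is a clean 2 MiB
# multiple, 1234 is not. The mask is inferred from the 2048 kB hugepage size
# reported in the log, not copied from memory.c itself.
mask=$((2 * 1024 * 1024 - 1))
check() { (( $1 & mask )) && echo "$1 misaligned" || echo "$1 aligned"; }
check 2097152
check 1234
```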
00:11:53.184 EAL: lib.eal log level changed from notice to debug 00:11:53.184 EAL: Detected lcore 0 as core 0 on socket 0 00:11:53.184 EAL: Detected lcore 1 as core 1 on socket 0 00:11:53.184 EAL: Detected lcore 2 as core 2 on socket 0 00:11:53.184 EAL: Detected lcore 3 as core 3 on socket 0 00:11:53.184 EAL: Detected lcore 4 as core 4 on socket 0 00:11:53.184 EAL: Detected lcore 5 as core 5 on socket 0 00:11:53.184 EAL: Detected lcore 6 as core 6 on socket 0 00:11:53.184 EAL: Detected lcore 7 as core 7 on socket 0 00:11:53.184 EAL: Detected lcore 8 as core 0 on socket 1 00:11:53.184 EAL: Detected lcore 9 as core 1 on socket 1 00:11:53.184 EAL: Detected lcore 10 as core 2 on socket 1 00:11:53.184 EAL: Detected lcore 11 as core 3 on socket 1 00:11:53.184 EAL: Detected lcore 12 as core 4 on socket 1 00:11:53.184 EAL: Detected lcore 13 as core 5 on socket 1 00:11:53.184 EAL: Detected lcore 14 as core 6 on socket 1 00:11:53.184 EAL: Detected lcore 15 as core 7 on socket 1 00:11:53.184 EAL: Detected lcore 16 as core 0 on socket 0 00:11:53.184 EAL: Detected lcore 17 as core 1 on socket 0 00:11:53.184 EAL: Detected lcore 18 as core 2 on socket 0 00:11:53.184 EAL: Detected lcore 19 as core 3 on socket 0 00:11:53.184 EAL: Detected lcore 20 as core 4 on socket 0 00:11:53.184 EAL: Detected lcore 21 as core 5 on socket 0 00:11:53.184 EAL: Detected lcore 22 as core 6 on socket 0 00:11:53.184 EAL: Detected lcore 23 as core 7 on socket 0 00:11:53.184 EAL: Detected lcore 24 as core 0 on socket 1 00:11:53.184 EAL: Detected lcore 25 as core 1 on socket 1 00:11:53.184 EAL: Detected lcore 26 as core 2 on socket 1 00:11:53.184 EAL: Detected lcore 27 as core 3 on socket 1 00:11:53.184 EAL: Detected lcore 28 as core 4 on socket 1 00:11:53.184 EAL: Detected lcore 29 as core 5 on socket 1 00:11:53.184 EAL: Detected lcore 30 as core 6 on socket 1 00:11:53.184 EAL: Detected lcore 31 as core 7 on socket 1 00:11:53.184 EAL: Maximum logical cores by configuration: 128 00:11:53.184 EAL: Detected 
CPU lcores: 32 00:11:53.184 EAL: Detected NUMA nodes: 2 00:11:53.184 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:11:53.184 EAL: Detected shared linkage of DPDK 00:11:53.184 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:11:53.184 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:11:53.184 EAL: Registered [vdev] bus. 00:11:53.184 EAL: bus.vdev log level changed from disabled to notice 00:11:53.184 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:11:53.184 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:11:53.184 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:11:53.184 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:11:53.184 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:11:53.184 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:11:53.184 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:11:53.184 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:11:53.184 EAL: No shared files mode enabled, IPC will be disabled 00:11:53.184 EAL: No shared files mode enabled, IPC is disabled 00:11:53.184 EAL: Bus pci wants IOVA as 'DC' 00:11:53.184 EAL: Bus vdev wants IOVA as 'DC' 00:11:53.184 EAL: Buses did not request a specific IOVA mode. 00:11:53.184 EAL: IOMMU is available, selecting IOVA as VA mode. 00:11:53.184 EAL: Selected IOVA mode 'VA' 00:11:53.184 EAL: No free 2048 kB hugepages reported on node 1 00:11:53.184 EAL: Probing VFIO support... 
00:11:53.184 EAL: IOMMU type 1 (Type 1) is supported 00:11:53.184 EAL: IOMMU type 7 (sPAPR) is not supported 00:11:53.184 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:11:53.184 EAL: VFIO support initialized 00:11:53.184 EAL: Ask a virtual area of 0x2e000 bytes 00:11:53.184 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:11:53.184 EAL: Setting up physically contiguous memory... 00:11:53.184 EAL: Setting maximum number of open files to 524288 00:11:53.184 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:11:53.184 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:11:53.184 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:11:53.184 EAL: Ask a virtual area of 0x61000 bytes 00:11:53.184 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:11:53.184 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:53.184 EAL: Ask a virtual area of 0x400000000 bytes 00:11:53.184 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:11:53.184 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:11:53.184 EAL: Ask a virtual area of 0x61000 bytes 00:11:53.184 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:11:53.184 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:53.184 EAL: Ask a virtual area of 0x400000000 bytes 00:11:53.184 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:11:53.184 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:11:53.184 EAL: Ask a virtual area of 0x61000 bytes 00:11:53.184 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:11:53.184 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:53.184 EAL: Ask a virtual area of 0x400000000 bytes 00:11:53.184 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:11:53.184 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:11:53.184 EAL: Ask a virtual area of 0x61000 bytes 00:11:53.184 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:11:53.184 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:53.184 EAL: Ask a virtual area of 0x400000000 bytes 00:11:53.184 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:11:53.184 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:11:53.184 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:11:53.184 EAL: Ask a virtual area of 0x61000 bytes 00:11:53.184 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:11:53.184 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:11:53.184 EAL: Ask a virtual area of 0x400000000 bytes 00:11:53.184 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:11:53.184 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:11:53.184 EAL: Ask a virtual area of 0x61000 bytes 00:11:53.184 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:11:53.184 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:11:53.184 EAL: Ask a virtual area of 0x400000000 bytes 00:11:53.184 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:11:53.184 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:11:53.184 EAL: Ask a virtual area of 0x61000 bytes 00:11:53.184 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:11:53.184 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:11:53.184 EAL: Ask a virtual area of 0x400000000 bytes 00:11:53.184 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:11:53.184 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:11:53.184 EAL: Ask a virtual area of 0x61000 bytes 00:11:53.184 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:11:53.184 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:11:53.184 EAL: Ask a virtual area of 0x400000000 bytes 00:11:53.184 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:11:53.184 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:11:53.184 EAL: Hugepages will be freed exactly as allocated. 00:11:53.184 EAL: No shared files mode enabled, IPC is disabled 00:11:53.184 EAL: No shared files mode enabled, IPC is disabled 00:11:53.184 EAL: TSC frequency is ~2700000 KHz 00:11:53.184 EAL: Main lcore 0 is ready (tid=7f9936ccea00;cpuset=[0]) 00:11:53.184 EAL: Trying to obtain current memory policy. 00:11:53.184 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:53.184 EAL: Restoring previous memory policy: 0 00:11:53.184 EAL: request: mp_malloc_sync 00:11:53.184 EAL: No shared files mode enabled, IPC is disabled 00:11:53.184 EAL: Heap on socket 0 was expanded by 2MB 00:11:53.184 EAL: No shared files mode enabled, IPC is disabled 00:11:53.184 EAL: No shared files mode enabled, IPC is disabled 00:11:53.184 EAL: No PCI address specified using 'addr=' in: bus=pci 00:11:53.184 EAL: Mem event callback 'spdk:(nil)' registered 00:11:53.184 00:11:53.184 00:11:53.184 CUnit - A unit testing framework for C - Version 2.1-3 00:11:53.184 http://cunit.sourceforge.net/ 00:11:53.184 00:11:53.184 00:11:53.184 Suite: components_suite 00:11:53.184 Test: vtophys_malloc_test ...passed 00:11:53.184 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:11:53.184 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:53.184 EAL: Restoring previous memory policy: 4 00:11:53.184 EAL: Calling mem event callback 'spdk:(nil)' 00:11:53.184 EAL: request: mp_malloc_sync 00:11:53.184 EAL: No shared files mode enabled, IPC is disabled 00:11:53.184 EAL: Heap on socket 0 was expanded by 4MB 00:11:53.184 EAL: Calling mem event callback 'spdk:(nil)' 00:11:53.184 EAL: request: mp_malloc_sync 00:11:53.184 EAL: No shared files mode enabled, IPC is disabled 00:11:53.184 EAL: Heap on socket 0 was shrunk by 4MB 00:11:53.184 EAL: Trying to obtain current memory policy. 
00:11:53.184 EAL: Setting policy MPOL_PREFERRED for socket 0
00:11:53.184 EAL: Restoring previous memory policy: 4
00:11:53.184 EAL: Calling mem event callback 'spdk:(nil)'
00:11:53.184 EAL: request: mp_malloc_sync
00:11:53.184 EAL: No shared files mode enabled, IPC is disabled
00:11:53.184 EAL: Heap on socket 0 was expanded by 6MB
00:11:53.184 EAL: Calling mem event callback 'spdk:(nil)'
00:11:53.184 EAL: request: mp_malloc_sync
00:11:53.185 EAL: No shared files mode enabled, IPC is disabled
00:11:53.185 EAL: Heap on socket 0 was shrunk by 6MB
00:11:53.185 EAL: Trying to obtain current memory policy.
00:11:53.185 EAL: Setting policy MPOL_PREFERRED for socket 0
00:11:53.185 EAL: Restoring previous memory policy: 4
00:11:53.185 EAL: Calling mem event callback 'spdk:(nil)'
00:11:53.185 EAL: request: mp_malloc_sync
00:11:53.185 EAL: No shared files mode enabled, IPC is disabled
00:11:53.185 EAL: Heap on socket 0 was expanded by 10MB
00:11:53.185 EAL: Calling mem event callback 'spdk:(nil)'
00:11:53.185 EAL: request: mp_malloc_sync
00:11:53.185 EAL: No shared files mode enabled, IPC is disabled
00:11:53.185 EAL: Heap on socket 0 was shrunk by 10MB
00:11:53.185 EAL: Trying to obtain current memory policy.
00:11:53.185 EAL: Setting policy MPOL_PREFERRED for socket 0
00:11:53.185 EAL: Restoring previous memory policy: 4
00:11:53.185 EAL: Calling mem event callback 'spdk:(nil)'
00:11:53.185 EAL: request: mp_malloc_sync
00:11:53.185 EAL: No shared files mode enabled, IPC is disabled
00:11:53.185 EAL: Heap on socket 0 was expanded by 18MB
00:11:53.185 EAL: Calling mem event callback 'spdk:(nil)'
00:11:53.185 EAL: request: mp_malloc_sync
00:11:53.185 EAL: No shared files mode enabled, IPC is disabled
00:11:53.185 EAL: Heap on socket 0 was shrunk by 18MB
00:11:53.185 EAL: Trying to obtain current memory policy.
00:11:53.185 EAL: Setting policy MPOL_PREFERRED for socket 0
00:11:53.185 EAL: Restoring previous memory policy: 4
00:11:53.185 EAL: Calling mem event callback 'spdk:(nil)'
00:11:53.185 EAL: request: mp_malloc_sync
00:11:53.185 EAL: No shared files mode enabled, IPC is disabled
00:11:53.185 EAL: Heap on socket 0 was expanded by 34MB
00:11:53.185 EAL: Calling mem event callback 'spdk:(nil)'
00:11:53.185 EAL: request: mp_malloc_sync
00:11:53.185 EAL: No shared files mode enabled, IPC is disabled
00:11:53.185 EAL: Heap on socket 0 was shrunk by 34MB
00:11:53.185 EAL: Trying to obtain current memory policy.
00:11:53.185 EAL: Setting policy MPOL_PREFERRED for socket 0
00:11:53.185 EAL: Restoring previous memory policy: 4
00:11:53.185 EAL: Calling mem event callback 'spdk:(nil)'
00:11:53.185 EAL: request: mp_malloc_sync
00:11:53.185 EAL: No shared files mode enabled, IPC is disabled
00:11:53.185 EAL: Heap on socket 0 was expanded by 66MB
00:11:53.185 EAL: Calling mem event callback 'spdk:(nil)'
00:11:53.185 EAL: request: mp_malloc_sync
00:11:53.185 EAL: No shared files mode enabled, IPC is disabled
00:11:53.185 EAL: Heap on socket 0 was shrunk by 66MB
00:11:53.185 EAL: Trying to obtain current memory policy.
00:11:53.185 EAL: Setting policy MPOL_PREFERRED for socket 0
00:11:53.442 EAL: Restoring previous memory policy: 4
00:11:53.442 EAL: Calling mem event callback 'spdk:(nil)'
00:11:53.442 EAL: request: mp_malloc_sync
00:11:53.442 EAL: No shared files mode enabled, IPC is disabled
00:11:53.442 EAL: Heap on socket 0 was expanded by 130MB
00:11:53.442 EAL: Calling mem event callback 'spdk:(nil)'
00:11:53.442 EAL: request: mp_malloc_sync
00:11:53.442 EAL: No shared files mode enabled, IPC is disabled
00:11:53.442 EAL: Heap on socket 0 was shrunk by 130MB
00:11:53.442 EAL: Trying to obtain current memory policy.
00:11:53.442 EAL: Setting policy MPOL_PREFERRED for socket 0
00:11:53.442 EAL: Restoring previous memory policy: 4
00:11:53.442 EAL: Calling mem event callback 'spdk:(nil)'
00:11:53.443 EAL: request: mp_malloc_sync
00:11:53.443 EAL: No shared files mode enabled, IPC is disabled
00:11:53.443 EAL: Heap on socket 0 was expanded by 258MB
00:11:53.443 EAL: Calling mem event callback 'spdk:(nil)'
00:11:53.443 EAL: request: mp_malloc_sync
00:11:53.443 EAL: No shared files mode enabled, IPC is disabled
00:11:53.443 EAL: Heap on socket 0 was shrunk by 258MB
00:11:53.443 EAL: Trying to obtain current memory policy.
00:11:53.443 EAL: Setting policy MPOL_PREFERRED for socket 0
00:11:53.721 EAL: Restoring previous memory policy: 4
00:11:53.721 EAL: Calling mem event callback 'spdk:(nil)'
00:11:53.721 EAL: request: mp_malloc_sync
00:11:53.721 EAL: No shared files mode enabled, IPC is disabled
00:11:53.721 EAL: Heap on socket 0 was expanded by 514MB
00:11:53.721 EAL: Calling mem event callback 'spdk:(nil)'
00:11:53.721 EAL: request: mp_malloc_sync
00:11:53.721 EAL: No shared files mode enabled, IPC is disabled
00:11:53.721 EAL: Heap on socket 0 was shrunk by 514MB
00:11:53.721 EAL: Trying to obtain current memory policy.
00:11:53.721 EAL: Setting policy MPOL_PREFERRED for socket 0
00:11:54.011 EAL: Restoring previous memory policy: 4
00:11:54.011 EAL: Calling mem event callback 'spdk:(nil)'
00:11:54.011 EAL: request: mp_malloc_sync
00:11:54.011 EAL: No shared files mode enabled, IPC is disabled
00:11:54.011 EAL: Heap on socket 0 was expanded by 1026MB
00:11:54.011 EAL: Calling mem event callback 'spdk:(nil)'
00:11:54.273 EAL: request: mp_malloc_sync
00:11:54.273 EAL: No shared files mode enabled, IPC is disabled
00:11:54.273 EAL: Heap on socket 0 was shrunk by 1026MB
00:11:54.273 passed
00:11:54.273
00:11:54.273 Run Summary: Type Total Ran Passed Failed Inactive
00:11:54.273 suites 1 1 n/a 0 0
00:11:54.273 tests 2 2 2 0 0
00:11:54.273 asserts 497 497 497 0 n/a
00:11:54.273
00:11:54.273 Elapsed time = 0.938 seconds
00:11:54.273 EAL: Calling mem event callback 'spdk:(nil)'
00:11:54.273 EAL: request: mp_malloc_sync
00:11:54.273 EAL: No shared files mode enabled, IPC is disabled
00:11:54.273 EAL: Heap on socket 0 was shrunk by 2MB
00:11:54.273 EAL: No shared files mode enabled, IPC is disabled
00:11:54.273 EAL: No shared files mode enabled, IPC is disabled
00:11:54.273 EAL: No shared files mode enabled, IPC is disabled
00:11:54.273
00:11:54.273 real 0m1.047s
00:11:54.273 user 0m0.497s
00:11:54.273 sys 0m0.518s
00:11:54.273 02:17:44 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:54.273 02:17:44 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:11:54.273 ************************************
00:11:54.273 END TEST env_vtophys
00:11:54.273 ************************************
00:11:54.273 02:17:44 env -- common/autotest_common.sh@1142 -- # return 0
00:11:54.273 02:17:44 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:11:54.273 02:17:44 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:11:54.273 02:17:44 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:54.273 02:17:44 env -- common/autotest_common.sh@10 -- # set +x
00:11:54.273 ************************************
00:11:54.273 START TEST env_pci
00:11:54.273 ************************************
00:11:54.273 02:17:44 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:11:54.273
00:11:54.273
00:11:54.273 CUnit - A unit testing framework for C - Version 2.1-3
00:11:54.273 http://cunit.sourceforge.net/
00:11:54.273
00:11:54.273
00:11:54.273 Suite: pci
00:11:54.273 Test: pci_hook ...[2024-07-11 02:17:44.568417] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1742865 has claimed it
00:11:54.273 EAL: Cannot find device (10000:00:01.0)
00:11:54.273 EAL: Failed to attach device on primary process
00:11:54.273 passed
00:11:54.273
00:11:54.273 Run Summary: Type Total Ran Passed Failed Inactive
00:11:54.273 suites 1 1 n/a 0 0
00:11:54.273 tests 1 1 1 0 0
00:11:54.273 asserts 25 25 25 0 n/a
00:11:54.273
00:11:54.273 Elapsed time = 0.017 seconds
00:11:54.273
00:11:54.273 real 0m0.029s
00:11:54.273 user 0m0.007s
00:11:54.273 sys 0m0.022s
00:11:54.273 02:17:44 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:54.273 02:17:44 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:11:54.273 ************************************
00:11:54.273 END TEST env_pci
00:11:54.273 ************************************
00:11:54.273 02:17:44 env -- common/autotest_common.sh@1142 -- # return 0
00:11:54.273 02:17:44 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:11:54.273 02:17:44 env -- env/env.sh@15 -- # uname
00:11:54.273 02:17:44 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:11:54.273 02:17:44 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:11:54.273 02:17:44 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:11:54.273 02:17:44 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:11:54.273 02:17:44 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:54.273 02:17:44 env -- common/autotest_common.sh@10 -- # set +x
00:11:54.273 ************************************
00:11:54.273 START TEST env_dpdk_post_init
00:11:54.273 ************************************
00:11:54.273 02:17:44 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:11:54.273 EAL: Detected CPU lcores: 32
00:11:54.273 EAL: Detected NUMA nodes: 2
00:11:54.273 EAL: Detected shared linkage of DPDK
00:11:54.273 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:11:54.273 EAL: Selected IOVA mode 'VA'
00:11:54.273 EAL: No free 2048 kB hugepages reported on node 1
00:11:54.273 EAL: VFIO support initialized
00:11:54.273 TELEMETRY: No legacy callbacks, legacy socket not created
00:11:54.532 EAL: Using IOMMU type 1 (Type 1)
00:11:54.532 EAL: Probe PCI driver: spdk_ioat (8086:3c20) device: 0000:00:04.0 (socket 0)
00:11:54.532 EAL: Probe PCI driver: spdk_ioat (8086:3c21) device: 0000:00:04.1 (socket 0)
00:11:54.532 EAL: Probe PCI driver: spdk_ioat (8086:3c22) device: 0000:00:04.2 (socket 0)
00:11:54.532 EAL: Probe PCI driver: spdk_ioat (8086:3c23) device: 0000:00:04.3 (socket 0)
00:11:54.532 EAL: Probe PCI driver: spdk_ioat (8086:3c24) device: 0000:00:04.4 (socket 0)
00:11:54.532 EAL: Probe PCI driver: spdk_ioat (8086:3c25) device: 0000:00:04.5 (socket 0)
00:11:54.533 EAL: Probe PCI driver: spdk_ioat (8086:3c26) device: 0000:00:04.6 (socket 0)
00:11:54.533 EAL: Probe PCI driver: spdk_ioat (8086:3c27) device: 0000:00:04.7 (socket 0)
00:11:54.533 EAL: Probe PCI driver: spdk_ioat (8086:3c20) device: 0000:80:04.0 (socket 1)
00:11:54.533 EAL: Probe PCI driver: spdk_ioat (8086:3c21) device: 0000:80:04.1 (socket 1)
00:11:54.533 EAL: Probe PCI driver: spdk_ioat (8086:3c22) device: 0000:80:04.2 (socket 1)
00:11:54.533 EAL: Probe PCI driver: spdk_ioat (8086:3c23) device: 0000:80:04.3 (socket 1)
00:11:54.533 EAL: Probe PCI driver: spdk_ioat (8086:3c24) device: 0000:80:04.4 (socket 1)
00:11:54.533 EAL: Probe PCI driver: spdk_ioat (8086:3c25) device: 0000:80:04.5 (socket 1)
00:11:54.533 EAL: Probe PCI driver: spdk_ioat (8086:3c26) device: 0000:80:04.6 (socket 1)
00:11:54.533 EAL: Probe PCI driver: spdk_ioat (8086:3c27) device: 0000:80:04.7 (socket 1)
00:11:55.470 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:84:00.0 (socket 1)
00:11:58.751 EAL: Releasing PCI mapped resource for 0000:84:00.0
00:11:58.751 EAL: Calling pci_unmap_resource for 0000:84:00.0 at 0x202001040000
00:11:58.751 Starting DPDK initialization...
00:11:58.751 Starting SPDK post initialization...
00:11:58.751 SPDK NVMe probe
00:11:58.751 Attaching to 0000:84:00.0
00:11:58.751 Attached to 0000:84:00.0
00:11:58.751 Cleaning up...
00:11:58.751
00:11:58.751 real 0m4.394s
00:11:58.751 user 0m3.269s
00:11:58.751 sys 0m0.191s
00:11:58.751 02:17:49 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:58.751 02:17:49 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:11:58.751 ************************************
00:11:58.751 END TEST env_dpdk_post_init
00:11:58.751 ************************************
00:11:58.751 02:17:49 env -- common/autotest_common.sh@1142 -- # return 0
00:11:58.751 02:17:49 env -- env/env.sh@26 -- # uname
00:11:58.751 02:17:49 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:11:58.751 02:17:49 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:11:58.751 02:17:49 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:11:58.751 02:17:49 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:58.751 02:17:49 env -- common/autotest_common.sh@10 -- # set +x
00:11:58.751 ************************************
00:11:58.751 START TEST env_mem_callbacks
00:11:58.751 ************************************
00:11:58.751 02:17:49 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:11:58.751 EAL: Detected CPU lcores: 32
00:11:58.751 EAL: Detected NUMA nodes: 2
00:11:58.751 EAL: Detected shared linkage of DPDK
00:11:58.751 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:11:58.751 EAL: Selected IOVA mode 'VA'
00:11:58.751 EAL: No free 2048 kB hugepages reported on node 1
00:11:58.751 EAL: VFIO support initialized
00:11:58.751 TELEMETRY: No legacy callbacks, legacy socket not created
00:11:58.751
00:11:58.751
00:11:58.751 CUnit - A unit testing framework for C - Version 2.1-3
00:11:58.751 http://cunit.sourceforge.net/
00:11:58.751
00:11:58.751
00:11:58.751 Suite: memory
00:11:58.751 Test: test ...
00:11:58.751 register 0x200000200000 2097152
00:11:58.751 malloc 3145728
00:11:58.751 register 0x200000400000 4194304
00:11:58.751 buf 0x200000500000 len 3145728 PASSED
00:11:58.751 malloc 64
00:11:58.751 buf 0x2000004fff40 len 64 PASSED
00:11:58.751 malloc 4194304
00:11:58.751 register 0x200000800000 6291456
00:11:58.751 buf 0x200000a00000 len 4194304 PASSED
00:11:58.751 free 0x200000500000 3145728
00:11:58.751 free 0x2000004fff40 64
00:11:58.751 unregister 0x200000400000 4194304 PASSED
00:11:58.751 free 0x200000a00000 4194304
00:11:58.751 unregister 0x200000800000 6291456 PASSED
00:11:58.751 malloc 8388608
00:11:58.751 register 0x200000400000 10485760
00:11:58.751 buf 0x200000600000 len 8388608 PASSED
00:11:58.751 free 0x200000600000 8388608
00:11:58.751 unregister 0x200000400000 10485760 PASSED
00:11:58.751 passed
00:11:58.751
00:11:58.751 Run Summary: Type Total Ran Passed Failed Inactive
00:11:58.751 suites 1 1 n/a 0 0
00:11:58.751 tests 1 1 1 0 0
00:11:58.751 asserts 15 15 15 0 n/a
00:11:58.751
00:11:58.751 Elapsed time = 0.005 seconds
00:11:58.751
00:11:58.751 real 0m0.046s
00:11:58.751 user 0m0.018s
00:11:58.751 sys 0m0.027s
00:11:58.751 02:17:49 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:58.751 02:17:49 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:11:58.751 ************************************
00:11:58.751 END TEST env_mem_callbacks
00:11:58.751 ************************************
00:11:58.751 02:17:49 env -- common/autotest_common.sh@1142 -- # return 0
00:11:58.751
00:11:58.751 real 0m6.062s
00:11:58.751 user 0m4.136s
00:11:58.751 sys 0m0.980s
00:11:58.751 02:17:49 env -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:58.751 02:17:49 env -- common/autotest_common.sh@10 -- # set +x
00:11:58.751 ************************************
00:11:58.751 END TEST env
00:11:58.751 ************************************
00:11:59.009 02:17:49 -- common/autotest_common.sh@1142 -- # return 0
00:11:59.009 02:17:49 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:11:59.009 02:17:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:11:59.009 02:17:49 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:59.009 02:17:49 -- common/autotest_common.sh@10 -- # set +x
00:11:59.009 ************************************
00:11:59.009 START TEST rpc
00:11:59.009 ************************************
00:11:59.010 02:17:49 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:11:59.010 * Looking for test storage...
00:11:59.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:11:59.010 02:17:49 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1743397
00:11:59.010 02:17:49 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:11:59.010 02:17:49 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:11:59.010 02:17:49 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1743397
00:11:59.010 02:17:49 rpc -- common/autotest_common.sh@829 -- # '[' -z 1743397 ']'
00:11:59.010 02:17:49 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:59.010 02:17:49 rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:11:59.010 02:17:49 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:59.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:59.010 02:17:49 rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:11:59.010 02:17:49 rpc -- common/autotest_common.sh@10 -- # set +x
00:11:59.010 [2024-07-11 02:17:49.333609] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:11:59.010 [2024-07-11 02:17:49.333705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1743397 ]
00:11:59.010 EAL: No free 2048 kB hugepages reported on node 1
00:11:59.010 [2024-07-11 02:17:49.393058] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:59.267 [2024-07-11 02:17:49.480531] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:11:59.268 [2024-07-11 02:17:49.480593] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1743397' to capture a snapshot of events at runtime.
00:11:59.268 [2024-07-11 02:17:49.480610] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:59.268 [2024-07-11 02:17:49.480623] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:59.268 [2024-07-11 02:17:49.480636] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1743397 for offline analysis/debug.
00:11:59.268 [2024-07-11 02:17:49.480667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:59.526 02:17:49 rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:59.526 02:17:49 rpc -- common/autotest_common.sh@862 -- # return 0
00:11:59.526 02:17:49 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:11:59.526 02:17:49 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:11:59.526 02:17:49 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:11:59.526 02:17:49 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:11:59.526 02:17:49 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:11:59.526 02:17:49 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:59.526 02:17:49 rpc -- common/autotest_common.sh@10 -- # set +x
00:11:59.526 ************************************
00:11:59.526 START TEST rpc_integrity
00:11:59.526 ************************************
00:11:59.526 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity
00:11:59.526 02:17:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:59.526 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:59.526 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:11:59.526 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:59.526 02:17:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:11:59.526 02:17:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:11:59.526 02:17:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:11:59.526 02:17:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:11:59.526 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:59.526 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:11:59.526 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:59.526 02:17:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:11:59.526 02:17:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:11:59.526 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:59.526 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:11:59.526 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:59.526 02:17:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:11:59.526 {
00:11:59.526 "name": "Malloc0",
00:11:59.526 "aliases": [
00:11:59.526 "6d8a582c-dd75-4615-8e1a-f2c3a78be30f"
00:11:59.526 ],
00:11:59.526 "product_name": "Malloc disk",
00:11:59.526 "block_size": 512,
00:11:59.526 "num_blocks": 16384,
00:11:59.526 "uuid": "6d8a582c-dd75-4615-8e1a-f2c3a78be30f",
00:11:59.526 "assigned_rate_limits": {
00:11:59.526 "rw_ios_per_sec": 0,
00:11:59.526 "rw_mbytes_per_sec": 0,
00:11:59.526 "r_mbytes_per_sec": 0,
00:11:59.526 "w_mbytes_per_sec": 0
00:11:59.526 },
00:11:59.526 "claimed": false,
00:11:59.526 "zoned": false,
00:11:59.526 "supported_io_types": {
00:11:59.526 "read": true,
00:11:59.526 "write": true,
00:11:59.526 "unmap": true,
00:11:59.526 "flush": true,
00:11:59.526 "reset": true,
00:11:59.526 "nvme_admin": false,
00:11:59.526 "nvme_io": false,
00:11:59.526 "nvme_io_md": false,
00:11:59.526 "write_zeroes": true,
00:11:59.526 "zcopy": true,
00:11:59.526 "get_zone_info": false,
00:11:59.526 "zone_management": false,
00:11:59.526 "zone_append": false,
00:11:59.526 "compare": false,
00:11:59.526 "compare_and_write": false,
00:11:59.526 "abort": true,
00:11:59.526 "seek_hole": false,
00:11:59.526 "seek_data": false,
00:11:59.526 "copy": true,
00:11:59.526 "nvme_iov_md": false
00:11:59.526 },
00:11:59.526 "memory_domains": [
00:11:59.526 {
00:11:59.526 "dma_device_id": "system",
00:11:59.526 "dma_device_type": 1
00:11:59.526 },
00:11:59.526 {
00:11:59.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:59.526 "dma_device_type": 2
00:11:59.526 }
00:11:59.526 ],
00:11:59.526 "driver_specific": {}
00:11:59.526 }
00:11:59.526 ]'
00:11:59.526 02:17:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:11:59.526 02:17:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:11:59.526 02:17:49 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:11:59.526 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:59.526 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:11:59.526 [2024-07-11 02:17:49.848244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:11:59.526 [2024-07-11 02:17:49.848291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:59.526 [2024-07-11 02:17:49.848316] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c4b830
00:11:59.526 [2024-07-11 02:17:49.848332] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:59.526 [2024-07-11 02:17:49.849853] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:59.526 [2024-07-11 02:17:49.849879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:11:59.527 Passthru0
00:11:59.527 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:59.527 02:17:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:11:59.527 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:59.527 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:11:59.527 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:59.527 02:17:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:11:59.527 {
00:11:59.527 "name": "Malloc0",
00:11:59.527 "aliases": [
00:11:59.527 "6d8a582c-dd75-4615-8e1a-f2c3a78be30f"
00:11:59.527 ],
00:11:59.527 "product_name": "Malloc disk",
00:11:59.527 "block_size": 512,
00:11:59.527 "num_blocks": 16384,
00:11:59.527 "uuid": "6d8a582c-dd75-4615-8e1a-f2c3a78be30f",
00:11:59.527 "assigned_rate_limits": {
00:11:59.527 "rw_ios_per_sec": 0,
00:11:59.527 "rw_mbytes_per_sec": 0,
00:11:59.527 "r_mbytes_per_sec": 0,
00:11:59.527 "w_mbytes_per_sec": 0
00:11:59.527 },
00:11:59.527 "claimed": true,
00:11:59.527 "claim_type": "exclusive_write",
00:11:59.527 "zoned": false,
00:11:59.527 "supported_io_types": {
00:11:59.527 "read": true,
00:11:59.527 "write": true,
00:11:59.527 "unmap": true,
00:11:59.527 "flush": true,
00:11:59.527 "reset": true,
00:11:59.527 "nvme_admin": false,
00:11:59.527 "nvme_io": false,
00:11:59.527 "nvme_io_md": false,
00:11:59.527 "write_zeroes": true,
00:11:59.527 "zcopy": true,
00:11:59.527 "get_zone_info": false,
00:11:59.527 "zone_management": false,
00:11:59.527 "zone_append": false,
00:11:59.527 "compare": false,
00:11:59.527 "compare_and_write": false,
00:11:59.527 "abort": true,
00:11:59.527 "seek_hole": false,
00:11:59.527 "seek_data": false,
00:11:59.527 "copy": true,
00:11:59.527 "nvme_iov_md": false
00:11:59.527 },
00:11:59.527 "memory_domains": [
00:11:59.527 {
00:11:59.527 "dma_device_id": "system",
00:11:59.527 "dma_device_type": 1
00:11:59.527 },
00:11:59.527 {
00:11:59.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:59.527 "dma_device_type": 2
00:11:59.527 }
00:11:59.527 ],
00:11:59.527 "driver_specific": {}
00:11:59.527 },
00:11:59.527 {
00:11:59.527 "name": "Passthru0",
00:11:59.527 "aliases": [
00:11:59.527 "ac79dc35-20f4-5462-bbe8-3abc093eec8a"
00:11:59.527 ],
00:11:59.527 "product_name": "passthru",
00:11:59.527 "block_size": 512,
00:11:59.527 "num_blocks": 16384,
00:11:59.527 "uuid": "ac79dc35-20f4-5462-bbe8-3abc093eec8a",
00:11:59.527 "assigned_rate_limits": {
00:11:59.527 "rw_ios_per_sec": 0,
00:11:59.527 "rw_mbytes_per_sec": 0,
00:11:59.527 "r_mbytes_per_sec": 0,
00:11:59.527 "w_mbytes_per_sec": 0
00:11:59.527 },
00:11:59.527 "claimed": false,
00:11:59.527 "zoned": false,
00:11:59.527 "supported_io_types": {
00:11:59.527 "read": true,
00:11:59.527 "write": true,
00:11:59.527 "unmap": true,
00:11:59.527 "flush": true,
00:11:59.527 "reset": true,
00:11:59.527 "nvme_admin": false,
00:11:59.527 "nvme_io": false,
00:11:59.527 "nvme_io_md": false,
00:11:59.527 "write_zeroes": true,
00:11:59.527 "zcopy": true,
00:11:59.527 "get_zone_info": false,
00:11:59.527 "zone_management": false,
00:11:59.527 "zone_append": false,
00:11:59.527 "compare": false,
00:11:59.527 "compare_and_write": false,
00:11:59.527 "abort": true,
00:11:59.527 "seek_hole": false,
00:11:59.527 "seek_data": false,
00:11:59.527 "copy": true,
00:11:59.527 "nvme_iov_md": false
00:11:59.527 },
00:11:59.527 "memory_domains": [
00:11:59.527 {
00:11:59.527 "dma_device_id": "system",
00:11:59.527 "dma_device_type": 1
00:11:59.527 },
00:11:59.527 {
00:11:59.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:59.527 "dma_device_type": 2
00:11:59.527 }
00:11:59.527 ],
00:11:59.527 "driver_specific": {
00:11:59.527 "passthru": {
00:11:59.527 "name": "Passthru0",
00:11:59.527 "base_bdev_name": "Malloc0"
00:11:59.527 }
00:11:59.527 }
00:11:59.527 }
00:11:59.527 ]'
00:11:59.527 02:17:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:11:59.527 02:17:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:11:59.527 02:17:49 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:11:59.527 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:59.527 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:11:59.527 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:59.527 02:17:49 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:11:59.527 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:59.527 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:11:59.527 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:59.527 02:17:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:11:59.527 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:59.527 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:11:59.527 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:59.527 02:17:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:11:59.527 02:17:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:11:59.786 02:17:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:11:59.786
00:11:59.786 real 0m0.259s
00:11:59.786 user 0m0.165s
00:11:59.786 sys 0m0.030s
00:11:59.786 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:59.786 02:17:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:11:59.786 ************************************
00:11:59.786 END TEST rpc_integrity
00:11:59.786 ************************************
00:11:59.786 02:17:50 rpc -- common/autotest_common.sh@1142 -- # return 0
00:11:59.786 02:17:50 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:11:59.786 02:17:50 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:11:59.786 02:17:50 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:59.786 02:17:50 rpc -- common/autotest_common.sh@10 -- # set +x
00:11:59.786 ************************************
00:11:59.786 START TEST rpc_plugins
00:11:59.786 ************************************
00:11:59.786 02:17:50 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins
00:11:59.786 02:17:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:11:59.786 02:17:50 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:59.786 02:17:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:11:59.786 02:17:50 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:59.786 02:17:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:11:59.786 02:17:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:11:59.786 02:17:50 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:59.786 02:17:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:11:59.786 02:17:50 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:59.786 02:17:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:11:59.786 {
00:11:59.786 "name": "Malloc1",
00:11:59.786 "aliases": [
00:11:59.786 "4a2ca01d-89df-4944-a5d8-e2dc3655bb60"
00:11:59.786 ],
00:11:59.786 "product_name": "Malloc disk",
00:11:59.786 "block_size": 4096,
00:11:59.786 "num_blocks": 256,
00:11:59.786 "uuid": "4a2ca01d-89df-4944-a5d8-e2dc3655bb60",
00:11:59.786 "assigned_rate_limits": {
00:11:59.786 "rw_ios_per_sec": 0,
00:11:59.786 "rw_mbytes_per_sec": 0,
00:11:59.786 "r_mbytes_per_sec": 0,
00:11:59.786 "w_mbytes_per_sec": 0
00:11:59.786 },
00:11:59.786 "claimed": false,
00:11:59.786 "zoned": false,
00:11:59.786 "supported_io_types": {
00:11:59.786 "read": true,
00:11:59.786 "write": true,
00:11:59.786 "unmap": true,
00:11:59.786 "flush": true,
00:11:59.786 "reset": true,
00:11:59.786 "nvme_admin": false,
00:11:59.786 "nvme_io": false,
00:11:59.786 "nvme_io_md": false,
00:11:59.786 "write_zeroes": true,
00:11:59.786 "zcopy": true,
00:11:59.786 "get_zone_info": false,
00:11:59.786 "zone_management": false,
00:11:59.786 "zone_append": false,
00:11:59.786 "compare": false,
00:11:59.786 "compare_and_write": false,
00:11:59.786 "abort": true,
00:11:59.786 "seek_hole": false,
00:11:59.786 "seek_data": false,
00:11:59.786 "copy": true,
00:11:59.786 "nvme_iov_md": false
00:11:59.786 },
00:11:59.786 "memory_domains": [
00:11:59.786 {
00:11:59.786 "dma_device_id": "system",
00:11:59.786 "dma_device_type": 1
00:11:59.786 },
00:11:59.786 {
00:11:59.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:59.786 "dma_device_type": 2
00:11:59.786 }
00:11:59.786 ],
00:11:59.786 "driver_specific": {}
00:11:59.786 }
00:11:59.786 ]'
00:11:59.786 02:17:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:11:59.786 02:17:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:11:59.786 02:17:50 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:11:59.787 02:17:50 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:59.787 02:17:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:11:59.787 02:17:50 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:59.787 02:17:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:11:59.787 02:17:50 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:59.787 02:17:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:11:59.787 02:17:50 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:59.787 02:17:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:11:59.787 02:17:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:11:59.787 02:17:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:11:59.787
00:11:59.787 real 0m0.127s
00:11:59.787 user 0m0.082s
00:11:59.787 sys 0m0.013s
00:11:59.787 02:17:50 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:59.787 02:17:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:11:59.787 ************************************
00:11:59.787 END TEST rpc_plugins
00:11:59.787 ************************************
00:11:59.787 02:17:50 rpc -- common/autotest_common.sh@1142 -- # return 0
00:11:59.787 02:17:50 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:11:59.787 02:17:50 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:11:59.787 02:17:50 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:59.787 02:17:50 rpc -- common/autotest_common.sh@10 -- # set +x
00:11:59.787 ************************************
00:11:59.787 START TEST rpc_trace_cmd_test
00:11:59.787 ************************************
00:11:59.787 02:17:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test
00:11:59.787 02:17:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:11:59.787 02:17:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:11:59.787 02:17:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:59.787 02:17:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.045 02:17:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:00.045 02:17:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:12:00.045 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1743397",
00:12:00.045 "tpoint_group_mask": "0x8",
00:12:00.045 "iscsi_conn": {
00:12:00.045 "mask": "0x2",
00:12:00.045 "tpoint_mask": "0x0"
00:12:00.045 },
00:12:00.045 "scsi": {
00:12:00.045 "mask": "0x4",
00:12:00.045 "tpoint_mask": "0x0"
00:12:00.045 },
00:12:00.045 "bdev": {
00:12:00.045 "mask": "0x8",
00:12:00.045 "tpoint_mask": "0xffffffffffffffff"
00:12:00.045 },
00:12:00.045 "nvmf_rdma": {
00:12:00.045 "mask": "0x10",
00:12:00.045 "tpoint_mask": "0x0"
00:12:00.045 },
00:12:00.045 "nvmf_tcp": {
00:12:00.045 "mask": "0x20",
00:12:00.045 "tpoint_mask": "0x0"
00:12:00.045 },
00:12:00.045 "ftl": { 00:12:00.045 "mask": "0x40", 00:12:00.045 "tpoint_mask": "0x0" 00:12:00.045 }, 00:12:00.045 "blobfs": { 00:12:00.045 "mask": "0x80", 00:12:00.045 "tpoint_mask": "0x0" 00:12:00.045 }, 00:12:00.045 "dsa": { 00:12:00.045 "mask": "0x200", 00:12:00.045 "tpoint_mask": "0x0" 00:12:00.045 }, 00:12:00.045 "thread": { 00:12:00.045 "mask": "0x400", 00:12:00.045 "tpoint_mask": "0x0" 00:12:00.045 }, 00:12:00.045 "nvme_pcie": { 00:12:00.045 "mask": "0x800", 00:12:00.045 "tpoint_mask": "0x0" 00:12:00.045 }, 00:12:00.045 "iaa": { 00:12:00.045 "mask": "0x1000", 00:12:00.045 "tpoint_mask": "0x0" 00:12:00.045 }, 00:12:00.045 "nvme_tcp": { 00:12:00.045 "mask": "0x2000", 00:12:00.045 "tpoint_mask": "0x0" 00:12:00.045 }, 00:12:00.045 "bdev_nvme": { 00:12:00.045 "mask": "0x4000", 00:12:00.045 "tpoint_mask": "0x0" 00:12:00.045 }, 00:12:00.045 "sock": { 00:12:00.045 "mask": "0x8000", 00:12:00.045 "tpoint_mask": "0x0" 00:12:00.045 } 00:12:00.045 }' 00:12:00.045 02:17:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:12:00.045 02:17:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:12:00.045 02:17:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:12:00.045 02:17:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:12:00.045 02:17:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:12:00.045 02:17:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:12:00.045 02:17:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:12:00.045 02:17:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:12:00.045 02:17:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:12:00.045 02:17:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:12:00.045 00:12:00.045 real 0m0.212s 00:12:00.045 user 0m0.190s 00:12:00.045 sys 0m0.016s 00:12:00.045 02:17:50 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:12:00.045 02:17:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.045 ************************************ 00:12:00.045 END TEST rpc_trace_cmd_test 00:12:00.045 ************************************ 00:12:00.045 02:17:50 rpc -- common/autotest_common.sh@1142 -- # return 0 00:12:00.045 02:17:50 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:12:00.045 02:17:50 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:12:00.045 02:17:50 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:12:00.045 02:17:50 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:00.045 02:17:50 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:00.045 02:17:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.045 ************************************ 00:12:00.045 START TEST rpc_daemon_integrity 00:12:00.045 ************************************ 00:12:00.304 02:17:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:12:00.304 02:17:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.304 02:17:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.304 02:17:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:00.304 02:17:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.304 02:17:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:12:00.304 02:17:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:12:00.304 02:17:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:12:00.304 02:17:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:12:00.304 02:17:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.304 02:17:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:00.304 02:17:50 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.304 02:17:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:12:00.304 02:17:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:12:00.304 02:17:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.304 02:17:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:00.304 02:17:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.304 02:17:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:12:00.304 { 00:12:00.304 "name": "Malloc2", 00:12:00.304 "aliases": [ 00:12:00.304 "4c98fe06-2202-40a8-b114-dd0f890fae0e" 00:12:00.304 ], 00:12:00.304 "product_name": "Malloc disk", 00:12:00.304 "block_size": 512, 00:12:00.304 "num_blocks": 16384, 00:12:00.304 "uuid": "4c98fe06-2202-40a8-b114-dd0f890fae0e", 00:12:00.304 "assigned_rate_limits": { 00:12:00.304 "rw_ios_per_sec": 0, 00:12:00.304 "rw_mbytes_per_sec": 0, 00:12:00.304 "r_mbytes_per_sec": 0, 00:12:00.304 "w_mbytes_per_sec": 0 00:12:00.304 }, 00:12:00.304 "claimed": false, 00:12:00.304 "zoned": false, 00:12:00.304 "supported_io_types": { 00:12:00.305 "read": true, 00:12:00.305 "write": true, 00:12:00.305 "unmap": true, 00:12:00.305 "flush": true, 00:12:00.305 "reset": true, 00:12:00.305 "nvme_admin": false, 00:12:00.305 "nvme_io": false, 00:12:00.305 "nvme_io_md": false, 00:12:00.305 "write_zeroes": true, 00:12:00.305 "zcopy": true, 00:12:00.305 "get_zone_info": false, 00:12:00.305 "zone_management": false, 00:12:00.305 "zone_append": false, 00:12:00.305 "compare": false, 00:12:00.305 "compare_and_write": false, 00:12:00.305 "abort": true, 00:12:00.305 "seek_hole": false, 00:12:00.305 "seek_data": false, 00:12:00.305 "copy": true, 00:12:00.305 "nvme_iov_md": false 00:12:00.305 }, 00:12:00.305 "memory_domains": [ 00:12:00.305 { 00:12:00.305 "dma_device_id": "system", 00:12:00.305 "dma_device_type": 
1 00:12:00.305 }, 00:12:00.305 { 00:12:00.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.305 "dma_device_type": 2 00:12:00.305 } 00:12:00.305 ], 00:12:00.305 "driver_specific": {} 00:12:00.305 } 00:12:00.305 ]' 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:00.305 [2024-07-11 02:17:50.586395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:12:00.305 [2024-07-11 02:17:50.586450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.305 [2024-07-11 02:17:50.586475] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a9aa70 00:12:00.305 [2024-07-11 02:17:50.586490] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.305 [2024-07-11 02:17:50.587872] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.305 [2024-07-11 02:17:50.587901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:12:00.305 Passthru0 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 
00:12:00.305 { 00:12:00.305 "name": "Malloc2", 00:12:00.305 "aliases": [ 00:12:00.305 "4c98fe06-2202-40a8-b114-dd0f890fae0e" 00:12:00.305 ], 00:12:00.305 "product_name": "Malloc disk", 00:12:00.305 "block_size": 512, 00:12:00.305 "num_blocks": 16384, 00:12:00.305 "uuid": "4c98fe06-2202-40a8-b114-dd0f890fae0e", 00:12:00.305 "assigned_rate_limits": { 00:12:00.305 "rw_ios_per_sec": 0, 00:12:00.305 "rw_mbytes_per_sec": 0, 00:12:00.305 "r_mbytes_per_sec": 0, 00:12:00.305 "w_mbytes_per_sec": 0 00:12:00.305 }, 00:12:00.305 "claimed": true, 00:12:00.305 "claim_type": "exclusive_write", 00:12:00.305 "zoned": false, 00:12:00.305 "supported_io_types": { 00:12:00.305 "read": true, 00:12:00.305 "write": true, 00:12:00.305 "unmap": true, 00:12:00.305 "flush": true, 00:12:00.305 "reset": true, 00:12:00.305 "nvme_admin": false, 00:12:00.305 "nvme_io": false, 00:12:00.305 "nvme_io_md": false, 00:12:00.305 "write_zeroes": true, 00:12:00.305 "zcopy": true, 00:12:00.305 "get_zone_info": false, 00:12:00.305 "zone_management": false, 00:12:00.305 "zone_append": false, 00:12:00.305 "compare": false, 00:12:00.305 "compare_and_write": false, 00:12:00.305 "abort": true, 00:12:00.305 "seek_hole": false, 00:12:00.305 "seek_data": false, 00:12:00.305 "copy": true, 00:12:00.305 "nvme_iov_md": false 00:12:00.305 }, 00:12:00.305 "memory_domains": [ 00:12:00.305 { 00:12:00.305 "dma_device_id": "system", 00:12:00.305 "dma_device_type": 1 00:12:00.305 }, 00:12:00.305 { 00:12:00.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.305 "dma_device_type": 2 00:12:00.305 } 00:12:00.305 ], 00:12:00.305 "driver_specific": {} 00:12:00.305 }, 00:12:00.305 { 00:12:00.305 "name": "Passthru0", 00:12:00.305 "aliases": [ 00:12:00.305 "825b1801-d8a8-5eca-9db1-73b67738a3a8" 00:12:00.305 ], 00:12:00.305 "product_name": "passthru", 00:12:00.305 "block_size": 512, 00:12:00.305 "num_blocks": 16384, 00:12:00.305 "uuid": "825b1801-d8a8-5eca-9db1-73b67738a3a8", 00:12:00.305 "assigned_rate_limits": { 00:12:00.305 
"rw_ios_per_sec": 0, 00:12:00.305 "rw_mbytes_per_sec": 0, 00:12:00.305 "r_mbytes_per_sec": 0, 00:12:00.305 "w_mbytes_per_sec": 0 00:12:00.305 }, 00:12:00.305 "claimed": false, 00:12:00.305 "zoned": false, 00:12:00.305 "supported_io_types": { 00:12:00.305 "read": true, 00:12:00.305 "write": true, 00:12:00.305 "unmap": true, 00:12:00.305 "flush": true, 00:12:00.305 "reset": true, 00:12:00.305 "nvme_admin": false, 00:12:00.305 "nvme_io": false, 00:12:00.305 "nvme_io_md": false, 00:12:00.305 "write_zeroes": true, 00:12:00.305 "zcopy": true, 00:12:00.305 "get_zone_info": false, 00:12:00.305 "zone_management": false, 00:12:00.305 "zone_append": false, 00:12:00.305 "compare": false, 00:12:00.305 "compare_and_write": false, 00:12:00.305 "abort": true, 00:12:00.305 "seek_hole": false, 00:12:00.305 "seek_data": false, 00:12:00.305 "copy": true, 00:12:00.305 "nvme_iov_md": false 00:12:00.305 }, 00:12:00.305 "memory_domains": [ 00:12:00.305 { 00:12:00.305 "dma_device_id": "system", 00:12:00.305 "dma_device_type": 1 00:12:00.305 }, 00:12:00.305 { 00:12:00.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.305 "dma_device_type": 2 00:12:00.305 } 00:12:00.305 ], 00:12:00.305 "driver_specific": { 00:12:00.305 "passthru": { 00:12:00.305 "name": "Passthru0", 00:12:00.305 "base_bdev_name": "Malloc2" 00:12:00.305 } 00:12:00.305 } 00:12:00.305 } 00:12:00.305 ]' 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc2 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:12:00.305 00:12:00.305 real 0m0.251s 00:12:00.305 user 0m0.163s 00:12:00.305 sys 0m0.029s 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:00.305 02:17:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:00.305 ************************************ 00:12:00.305 END TEST rpc_daemon_integrity 00:12:00.305 ************************************ 00:12:00.564 02:17:50 rpc -- common/autotest_common.sh@1142 -- # return 0 00:12:00.564 02:17:50 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:12:00.564 02:17:50 rpc -- rpc/rpc.sh@84 -- # killprocess 1743397 00:12:00.564 02:17:50 rpc -- common/autotest_common.sh@948 -- # '[' -z 1743397 ']' 00:12:00.564 02:17:50 rpc -- common/autotest_common.sh@952 -- # kill -0 1743397 00:12:00.564 02:17:50 rpc -- common/autotest_common.sh@953 -- # uname 00:12:00.564 02:17:50 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:00.564 02:17:50 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o 
comm= 1743397 00:12:00.564 02:17:50 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:00.564 02:17:50 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:00.564 02:17:50 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1743397' 00:12:00.564 killing process with pid 1743397 00:12:00.564 02:17:50 rpc -- common/autotest_common.sh@967 -- # kill 1743397 00:12:00.564 02:17:50 rpc -- common/autotest_common.sh@972 -- # wait 1743397 00:12:00.822 00:12:00.822 real 0m1.808s 00:12:00.822 user 0m2.428s 00:12:00.822 sys 0m0.561s 00:12:00.822 02:17:51 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:00.822 02:17:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.822 ************************************ 00:12:00.822 END TEST rpc 00:12:00.822 ************************************ 00:12:00.822 02:17:51 -- common/autotest_common.sh@1142 -- # return 0 00:12:00.822 02:17:51 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:12:00.822 02:17:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:00.822 02:17:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:00.822 02:17:51 -- common/autotest_common.sh@10 -- # set +x 00:12:00.822 ************************************ 00:12:00.822 START TEST skip_rpc 00:12:00.822 ************************************ 00:12:00.822 02:17:51 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:12:00.822 * Looking for test storage... 
00:12:00.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:12:00.822 02:17:51 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:12:00.822 02:17:51 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:12:00.822 02:17:51 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:12:00.822 02:17:51 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:00.822 02:17:51 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:00.822 02:17:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.822 ************************************ 00:12:00.822 START TEST skip_rpc 00:12:00.822 ************************************ 00:12:00.822 02:17:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:12:00.822 02:17:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1743770 00:12:00.822 02:17:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:12:00.822 02:17:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:12:00.822 02:17:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:12:00.822 [2024-07-11 02:17:51.221527] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:12:00.822 [2024-07-11 02:17:51.221632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1743770 ] 00:12:01.081 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.081 [2024-07-11 02:17:51.285768] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.081 [2024-07-11 02:17:51.376507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es 
== 0 )) 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1743770 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1743770 ']' 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1743770 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1743770 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1743770' 00:12:06.344 killing process with pid 1743770 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1743770 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1743770 00:12:06.344 00:12:06.344 real 0m5.305s 00:12:06.344 user 0m5.007s 00:12:06.344 sys 0m0.288s 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:06.344 02:17:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.344 ************************************ 00:12:06.344 END TEST skip_rpc 00:12:06.344 ************************************ 00:12:06.344 02:17:56 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:12:06.344 02:17:56 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:12:06.344 02:17:56 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:06.344 02:17:56 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:06.344 
02:17:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.344 ************************************ 00:12:06.344 START TEST skip_rpc_with_json 00:12:06.344 ************************************ 00:12:06.344 02:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:12:06.344 02:17:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:12:06.344 02:17:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1744292 00:12:06.344 02:17:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:12:06.344 02:17:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:12:06.344 02:17:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1744292 00:12:06.344 02:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1744292 ']' 00:12:06.344 02:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.344 02:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:06.344 02:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.344 02:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:06.344 02:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:12:06.344 [2024-07-11 02:17:56.579353] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:12:06.344 [2024-07-11 02:17:56.579445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744292 ] 00:12:06.344 EAL: No free 2048 kB hugepages reported on node 1 00:12:06.344 [2024-07-11 02:17:56.638112] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.344 [2024-07-11 02:17:56.725342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.603 02:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:06.603 02:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:12:06.603 02:17:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:12:06.603 02:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.603 02:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:12:06.603 [2024-07-11 02:17:56.948530] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:12:06.603 request: 00:12:06.603 { 00:12:06.603 "trtype": "tcp", 00:12:06.603 "method": "nvmf_get_transports", 00:12:06.603 "req_id": 1 00:12:06.603 } 00:12:06.603 Got JSON-RPC error response 00:12:06.603 response: 00:12:06.603 { 00:12:06.603 "code": -19, 00:12:06.603 "message": "No such device" 00:12:06.603 } 00:12:06.603 02:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:12:06.603 02:17:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:12:06.603 02:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.603 02:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:12:06.603 [2024-07-11 02:17:56.956667] tcp.c: 
672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.603 02:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.603 02:17:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:12:06.603 02:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.603 02:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:12:06.862 02:17:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.862 02:17:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:12:06.862 { 00:12:06.862 "subsystems": [ 00:12:06.862 { 00:12:06.862 "subsystem": "vfio_user_target", 00:12:06.862 "config": null 00:12:06.862 }, 00:12:06.862 { 00:12:06.862 "subsystem": "keyring", 00:12:06.862 "config": [] 00:12:06.862 }, 00:12:06.862 { 00:12:06.862 "subsystem": "iobuf", 00:12:06.862 "config": [ 00:12:06.862 { 00:12:06.862 "method": "iobuf_set_options", 00:12:06.862 "params": { 00:12:06.862 "small_pool_count": 8192, 00:12:06.862 "large_pool_count": 1024, 00:12:06.862 "small_bufsize": 8192, 00:12:06.862 "large_bufsize": 135168 00:12:06.862 } 00:12:06.862 } 00:12:06.862 ] 00:12:06.862 }, 00:12:06.862 { 00:12:06.862 "subsystem": "sock", 00:12:06.862 "config": [ 00:12:06.862 { 00:12:06.862 "method": "sock_set_default_impl", 00:12:06.863 "params": { 00:12:06.863 "impl_name": "posix" 00:12:06.863 } 00:12:06.863 }, 00:12:06.863 { 00:12:06.863 "method": "sock_impl_set_options", 00:12:06.863 "params": { 00:12:06.863 "impl_name": "ssl", 00:12:06.863 "recv_buf_size": 4096, 00:12:06.863 "send_buf_size": 4096, 00:12:06.863 "enable_recv_pipe": true, 00:12:06.863 "enable_quickack": false, 00:12:06.863 "enable_placement_id": 0, 00:12:06.863 "enable_zerocopy_send_server": true, 00:12:06.863 "enable_zerocopy_send_client": false, 00:12:06.863 "zerocopy_threshold": 0, 
00:12:06.863 "tls_version": 0, 00:12:06.863 "enable_ktls": false 00:12:06.863 } 00:12:06.863 }, 00:12:06.863 { 00:12:06.863 "method": "sock_impl_set_options", 00:12:06.863 "params": { 00:12:06.863 "impl_name": "posix", 00:12:06.863 "recv_buf_size": 2097152, 00:12:06.863 "send_buf_size": 2097152, 00:12:06.863 "enable_recv_pipe": true, 00:12:06.863 "enable_quickack": false, 00:12:06.863 "enable_placement_id": 0, 00:12:06.863 "enable_zerocopy_send_server": true, 00:12:06.863 "enable_zerocopy_send_client": false, 00:12:06.863 "zerocopy_threshold": 0, 00:12:06.863 "tls_version": 0, 00:12:06.863 "enable_ktls": false 00:12:06.863 } 00:12:06.863 } 00:12:06.863 ] 00:12:06.863 }, 00:12:06.863 { 00:12:06.863 "subsystem": "vmd", 00:12:06.863 "config": [] 00:12:06.863 }, 00:12:06.863 { 00:12:06.863 "subsystem": "accel", 00:12:06.863 "config": [ 00:12:06.863 { 00:12:06.863 "method": "accel_set_options", 00:12:06.863 "params": { 00:12:06.863 "small_cache_size": 128, 00:12:06.863 "large_cache_size": 16, 00:12:06.863 "task_count": 2048, 00:12:06.863 "sequence_count": 2048, 00:12:06.863 "buf_count": 2048 00:12:06.863 } 00:12:06.863 } 00:12:06.863 ] 00:12:06.863 }, 00:12:06.863 { 00:12:06.863 "subsystem": "bdev", 00:12:06.863 "config": [ 00:12:06.863 { 00:12:06.863 "method": "bdev_set_options", 00:12:06.863 "params": { 00:12:06.863 "bdev_io_pool_size": 65535, 00:12:06.863 "bdev_io_cache_size": 256, 00:12:06.863 "bdev_auto_examine": true, 00:12:06.863 "iobuf_small_cache_size": 128, 00:12:06.863 "iobuf_large_cache_size": 16 00:12:06.863 } 00:12:06.863 }, 00:12:06.863 { 00:12:06.863 "method": "bdev_raid_set_options", 00:12:06.863 "params": { 00:12:06.863 "process_window_size_kb": 1024 00:12:06.863 } 00:12:06.863 }, 00:12:06.863 { 00:12:06.863 "method": "bdev_iscsi_set_options", 00:12:06.863 "params": { 00:12:06.863 "timeout_sec": 30 00:12:06.863 } 00:12:06.863 }, 00:12:06.863 { 00:12:06.863 "method": "bdev_nvme_set_options", 00:12:06.863 "params": { 00:12:06.863 "action_on_timeout": 
"none", 00:12:06.863 "timeout_us": 0, 00:12:06.863 "timeout_admin_us": 0, 00:12:06.863 "keep_alive_timeout_ms": 10000, 00:12:06.863 "arbitration_burst": 0, 00:12:06.863 "low_priority_weight": 0, 00:12:06.863 "medium_priority_weight": 0, 00:12:06.863 "high_priority_weight": 0, 00:12:06.863 "nvme_adminq_poll_period_us": 10000, 00:12:06.863 "nvme_ioq_poll_period_us": 0, 00:12:06.863 "io_queue_requests": 0, 00:12:06.863 "delay_cmd_submit": true, 00:12:06.863 "transport_retry_count": 4, 00:12:06.863 "bdev_retry_count": 3, 00:12:06.863 "transport_ack_timeout": 0, 00:12:06.863 "ctrlr_loss_timeout_sec": 0, 00:12:06.863 "reconnect_delay_sec": 0, 00:12:06.863 "fast_io_fail_timeout_sec": 0, 00:12:06.863 "disable_auto_failback": false, 00:12:06.863 "generate_uuids": false, 00:12:06.863 "transport_tos": 0, 00:12:06.863 "nvme_error_stat": false, 00:12:06.863 "rdma_srq_size": 0, 00:12:06.863 "io_path_stat": false, 00:12:06.863 "allow_accel_sequence": false, 00:12:06.863 "rdma_max_cq_size": 0, 00:12:06.863 "rdma_cm_event_timeout_ms": 0, 00:12:06.863 "dhchap_digests": [ 00:12:06.863 "sha256", 00:12:06.863 "sha384", 00:12:06.863 "sha512" 00:12:06.863 ], 00:12:06.863 "dhchap_dhgroups": [ 00:12:06.863 "null", 00:12:06.863 "ffdhe2048", 00:12:06.863 "ffdhe3072", 00:12:06.863 "ffdhe4096", 00:12:06.863 "ffdhe6144", 00:12:06.863 "ffdhe8192" 00:12:06.863 ] 00:12:06.863 } 00:12:06.863 }, 00:12:06.863 { 00:12:06.863 "method": "bdev_nvme_set_hotplug", 00:12:06.863 "params": { 00:12:06.863 "period_us": 100000, 00:12:06.863 "enable": false 00:12:06.863 } 00:12:06.863 }, 00:12:06.863 { 00:12:06.863 "method": "bdev_wait_for_examine" 00:12:06.863 } 00:12:06.863 ] 00:12:06.863 }, 00:12:06.863 { 00:12:06.863 "subsystem": "scsi", 00:12:06.863 "config": null 00:12:06.863 }, 00:12:06.863 { 00:12:06.863 "subsystem": "scheduler", 00:12:06.863 "config": [ 00:12:06.863 { 00:12:06.863 "method": "framework_set_scheduler", 00:12:06.863 "params": { 00:12:06.863 "name": "static" 00:12:06.863 } 00:12:06.863 } 
00:12:06.863 ] 00:12:06.863 }, 00:12:06.863 { 00:12:06.863 "subsystem": "vhost_scsi", 00:12:06.863 "config": [] 00:12:06.863 }, 00:12:06.863 { 00:12:06.863 "subsystem": "vhost_blk", 00:12:06.863 "config": [] 00:12:06.863 }, 00:12:06.863 { 00:12:06.863 "subsystem": "ublk", 00:12:06.863 "config": [] 00:12:06.863 }, 00:12:06.863 { 00:12:06.863 "subsystem": "nbd", 00:12:06.863 "config": [] 00:12:06.863 }, 00:12:06.863 { 00:12:06.863 "subsystem": "nvmf", 00:12:06.863 "config": [ 00:12:06.863 { 00:12:06.863 "method": "nvmf_set_config", 00:12:06.863 "params": { 00:12:06.863 "discovery_filter": "match_any", 00:12:06.863 "admin_cmd_passthru": { 00:12:06.863 "identify_ctrlr": false 00:12:06.863 } 00:12:06.863 } 00:12:06.863 }, 00:12:06.863 { 00:12:06.863 "method": "nvmf_set_max_subsystems", 00:12:06.863 "params": { 00:12:06.863 "max_subsystems": 1024 00:12:06.863 } 00:12:06.863 }, 00:12:06.863 { 00:12:06.863 "method": "nvmf_set_crdt", 00:12:06.863 "params": { 00:12:06.863 "crdt1": 0, 00:12:06.863 "crdt2": 0, 00:12:06.863 "crdt3": 0 00:12:06.863 } 00:12:06.863 }, 00:12:06.863 { 00:12:06.863 "method": "nvmf_create_transport", 00:12:06.863 "params": { 00:12:06.863 "trtype": "TCP", 00:12:06.863 "max_queue_depth": 128, 00:12:06.863 "max_io_qpairs_per_ctrlr": 127, 00:12:06.863 "in_capsule_data_size": 4096, 00:12:06.863 "max_io_size": 131072, 00:12:06.863 "io_unit_size": 131072, 00:12:06.863 "max_aq_depth": 128, 00:12:06.863 "num_shared_buffers": 511, 00:12:06.863 "buf_cache_size": 4294967295, 00:12:06.863 "dif_insert_or_strip": false, 00:12:06.863 "zcopy": false, 00:12:06.863 "c2h_success": true, 00:12:06.863 "sock_priority": 0, 00:12:06.863 "abort_timeout_sec": 1, 00:12:06.863 "ack_timeout": 0, 00:12:06.863 "data_wr_pool_size": 0 00:12:06.863 } 00:12:06.863 } 00:12:06.863 ] 00:12:06.863 }, 00:12:06.863 { 00:12:06.863 "subsystem": "iscsi", 00:12:06.863 "config": [ 00:12:06.863 { 00:12:06.863 "method": "iscsi_set_options", 00:12:06.863 "params": { 00:12:06.863 "node_base": 
"iqn.2016-06.io.spdk", 00:12:06.863 "max_sessions": 128, 00:12:06.863 "max_connections_per_session": 2, 00:12:06.863 "max_queue_depth": 64, 00:12:06.863 "default_time2wait": 2, 00:12:06.863 "default_time2retain": 20, 00:12:06.863 "first_burst_length": 8192, 00:12:06.863 "immediate_data": true, 00:12:06.863 "allow_duplicated_isid": false, 00:12:06.863 "error_recovery_level": 0, 00:12:06.863 "nop_timeout": 60, 00:12:06.863 "nop_in_interval": 30, 00:12:06.863 "disable_chap": false, 00:12:06.863 "require_chap": false, 00:12:06.863 "mutual_chap": false, 00:12:06.863 "chap_group": 0, 00:12:06.863 "max_large_datain_per_connection": 64, 00:12:06.863 "max_r2t_per_connection": 4, 00:12:06.863 "pdu_pool_size": 36864, 00:12:06.863 "immediate_data_pool_size": 16384, 00:12:06.863 "data_out_pool_size": 2048 00:12:06.863 } 00:12:06.863 } 00:12:06.863 ] 00:12:06.863 } 00:12:06.863 ] 00:12:06.863 } 00:12:06.863 02:17:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:06.863 02:17:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1744292 00:12:06.863 02:17:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1744292 ']' 00:12:06.863 02:17:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1744292 00:12:06.863 02:17:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:12:06.863 02:17:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:06.863 02:17:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1744292 00:12:06.864 02:17:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:06.864 02:17:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:06.864 02:17:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1744292' 00:12:06.864 
killing process with pid 1744292 00:12:06.864 02:17:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1744292 00:12:06.864 02:17:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1744292 00:12:07.122 02:17:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1744379 00:12:07.122 02:17:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:12:07.122 02:17:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:12:12.386 02:18:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1744379 00:12:12.386 02:18:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1744379 ']' 00:12:12.386 02:18:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1744379 00:12:12.386 02:18:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:12:12.386 02:18:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:12.386 02:18:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1744379 00:12:12.386 02:18:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:12.386 02:18:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:12.386 02:18:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1744379' 00:12:12.386 killing process with pid 1744379 00:12:12.386 02:18:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1744379 00:12:12.386 02:18:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1744379 00:12:12.386 02:18:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport 
Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:12:12.386 02:18:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:12:12.386 00:12:12.386 real 0m6.186s 00:12:12.386 user 0m5.882s 00:12:12.387 sys 0m0.611s 00:12:12.387 02:18:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:12.387 02:18:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:12:12.387 ************************************ 00:12:12.387 END TEST skip_rpc_with_json 00:12:12.387 ************************************ 00:12:12.387 02:18:02 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:12:12.387 02:18:02 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:12:12.387 02:18:02 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:12.387 02:18:02 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:12.387 02:18:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.387 ************************************ 00:12:12.387 START TEST skip_rpc_with_delay 00:12:12.387 ************************************ 00:12:12.387 02:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:12:12.387 02:18:02 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:12:12.387 02:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:12:12.387 02:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:12:12.387 02:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:12:12.387 02:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:12.387 02:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:12:12.387 02:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:12.387 02:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:12:12.387 02:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:12.387 02:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:12:12.387 02:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:12:12.387 02:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:12:12.645 [2024-07-11 02:18:02.828794] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:12:12.645 [2024-07-11 02:18:02.828934] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:12:12.645 02:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:12:12.645 02:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:12.645 02:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:12.645 02:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:12.645 00:12:12.645 real 0m0.082s 00:12:12.645 user 0m0.052s 00:12:12.645 sys 0m0.029s 00:12:12.645 02:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:12.645 02:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:12:12.645 ************************************ 00:12:12.645 END TEST skip_rpc_with_delay 00:12:12.645 ************************************ 00:12:12.645 02:18:02 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:12:12.645 02:18:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:12:12.645 02:18:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:12:12.645 02:18:02 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:12:12.645 02:18:02 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:12.645 02:18:02 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:12.645 02:18:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.645 ************************************ 00:12:12.645 START TEST exit_on_failed_rpc_init 00:12:12.645 ************************************ 00:12:12.645 02:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:12:12.645 02:18:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1745025 00:12:12.645 02:18:02 skip_rpc.exit_on_failed_rpc_init -- 
rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:12:12.645 02:18:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1745025 00:12:12.645 02:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1745025 ']' 00:12:12.645 02:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.645 02:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:12.645 02:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.645 02:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:12.645 02:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:12:12.645 [2024-07-11 02:18:02.956595] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:12:12.645 [2024-07-11 02:18:02.956701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745025 ] 00:12:12.645 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.645 [2024-07-11 02:18:03.017244] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.903 [2024-07-11 02:18:03.104677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.903 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:12.903 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:12:12.903 02:18:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:12:12.903 02:18:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:12:12.903 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:12:12.903 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:12:12.903 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:12:12.903 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:12.903 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:12:13.211 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:13.211 02:18:03 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:12:13.211 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:13.211 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:12:13.211 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:12:13.211 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:12:13.211 [2024-07-11 02:18:03.385269] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:12:13.211 [2024-07-11 02:18:03.385371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745078 ] 00:12:13.211 EAL: No free 2048 kB hugepages reported on node 1 00:12:13.211 [2024-07-11 02:18:03.445710] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.211 [2024-07-11 02:18:03.536412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.211 [2024-07-11 02:18:03.536543] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:12:13.211 [2024-07-11 02:18:03.536566] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:12:13.211 [2024-07-11 02:18:03.536584] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:13.211 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:12:13.211 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:13.211 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:12:13.211 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:12:13.211 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:12:13.211 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:13.211 02:18:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:13.211 02:18:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1745025 00:12:13.211 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1745025 ']' 00:12:13.211 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1745025 00:12:13.211 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:12:13.211 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:13.211 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1745025 00:12:13.467 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:13.467 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:13.467 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1745025' 
00:12:13.467 killing process with pid 1745025 00:12:13.467 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1745025 00:12:13.467 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1745025 00:12:13.724 00:12:13.724 real 0m1.010s 00:12:13.724 user 0m1.200s 00:12:13.724 sys 0m0.409s 00:12:13.724 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:13.724 02:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:12:13.724 ************************************ 00:12:13.724 END TEST exit_on_failed_rpc_init 00:12:13.724 ************************************ 00:12:13.724 02:18:03 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:12:13.724 02:18:03 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:12:13.724 00:12:13.724 real 0m12.856s 00:12:13.724 user 0m12.252s 00:12:13.724 sys 0m1.513s 00:12:13.724 02:18:03 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:13.724 02:18:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.724 ************************************ 00:12:13.724 END TEST skip_rpc 00:12:13.724 ************************************ 00:12:13.724 02:18:03 -- common/autotest_common.sh@1142 -- # return 0 00:12:13.724 02:18:03 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:12:13.724 02:18:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:13.724 02:18:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:13.724 02:18:03 -- common/autotest_common.sh@10 -- # set +x 00:12:13.724 ************************************ 00:12:13.724 START TEST rpc_client 00:12:13.724 ************************************ 00:12:13.724 02:18:03 rpc_client -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:12:13.724 * Looking for test storage... 00:12:13.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:12:13.724 02:18:04 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:12:13.724 OK 00:12:13.724 02:18:04 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:12:13.724 00:12:13.724 real 0m0.071s 00:12:13.724 user 0m0.028s 00:12:13.724 sys 0m0.047s 00:12:13.724 02:18:04 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:13.724 02:18:04 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:12:13.724 ************************************ 00:12:13.724 END TEST rpc_client 00:12:13.724 ************************************ 00:12:13.724 02:18:04 -- common/autotest_common.sh@1142 -- # return 0 00:12:13.724 02:18:04 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:12:13.724 02:18:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:13.724 02:18:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:13.724 02:18:04 -- common/autotest_common.sh@10 -- # set +x 00:12:13.724 ************************************ 00:12:13.724 START TEST json_config 00:12:13.724 ************************************ 00:12:13.724 02:18:04 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:12:13.724 02:18:04 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:13.724 02:18:04 json_config -- nvmf/common.sh@7 -- # uname -s 00:12:13.983 02:18:04 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.983 02:18:04 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.983 
02:18:04 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.983 02:18:04 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.983 02:18:04 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.983 02:18:04 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.983 02:18:04 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.983 02:18:04 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.983 02:18:04 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.983 02:18:04 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.983 02:18:04 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:13.983 02:18:04 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:12:13.983 02:18:04 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.983 02:18:04 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.983 02:18:04 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:13.983 02:18:04 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.983 02:18:04 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:13.983 02:18:04 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.983 02:18:04 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.983 02:18:04 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.983 02:18:04 json_config -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.983 02:18:04 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.983 02:18:04 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.983 02:18:04 json_config -- paths/export.sh@5 -- # export PATH 00:12:13.983 02:18:04 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.983 02:18:04 json_config -- nvmf/common.sh@47 -- # : 0 00:12:13.983 02:18:04 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:13.983 
02:18:04 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:13.983 02:18:04 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.983 02:18:04 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.983 02:18:04 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.983 02:18:04 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:13.983 02:18:04 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:13.983 02:18:04 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:13.983 02:18:04 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:12:13.983 02:18:04 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:12:13.983 02:18:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:12:13.983 02:18:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:12:13.983 02:18:04 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:12:13.983 02:18:04 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:12:13.983 02:18:04 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:12:13.983 02:18:04 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:12:13.983 02:18:04 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:12:13.984 02:18:04 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:12:13.984 02:18:04 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:12:13.984 02:18:04 json_config -- json_config/json_config.sh@34 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:12:13.984 02:18:04 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:12:13.984 02:18:04 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:12:13.984 02:18:04 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:12:13.984 02:18:04 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:12:13.984 INFO: JSON configuration test init 00:12:13.984 02:18:04 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:12:13.984 02:18:04 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:12:13.984 02:18:04 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:13.984 02:18:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:13.984 02:18:04 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:12:13.984 02:18:04 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:13.984 02:18:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:13.984 02:18:04 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:12:13.984 02:18:04 json_config -- json_config/common.sh@9 -- # local app=target 00:12:13.984 02:18:04 json_config -- json_config/common.sh@10 -- # shift 00:12:13.984 02:18:04 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:12:13.984 02:18:04 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:12:13.984 02:18:04 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:12:13.984 02:18:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:13.984 02:18:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 
00:12:13.984 02:18:04 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1745291 00:12:13.984 02:18:04 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:12:13.984 02:18:04 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:12:13.984 Waiting for target to run... 00:12:13.984 02:18:04 json_config -- json_config/common.sh@25 -- # waitforlisten 1745291 /var/tmp/spdk_tgt.sock 00:12:13.984 02:18:04 json_config -- common/autotest_common.sh@829 -- # '[' -z 1745291 ']' 00:12:13.984 02:18:04 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:12:13.984 02:18:04 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:13.984 02:18:04 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:12:13.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:12:13.984 02:18:04 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:13.984 02:18:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:13.984 [2024-07-11 02:18:04.226350] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:12:13.984 [2024-07-11 02:18:04.226459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745291 ] 00:12:13.984 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.243 [2024-07-11 02:18:04.583036] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.243 [2024-07-11 02:18:04.649886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.181 02:18:05 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:15.181 02:18:05 json_config -- common/autotest_common.sh@862 -- # return 0 00:12:15.181 02:18:05 json_config -- json_config/common.sh@26 -- # echo '' 00:12:15.181 00:12:15.181 02:18:05 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:12:15.181 02:18:05 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:12:15.181 02:18:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:15.181 02:18:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:15.181 02:18:05 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:12:15.181 02:18:05 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:12:15.181 02:18:05 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:15.181 02:18:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:15.181 02:18:05 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:12:15.181 02:18:05 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:12:15.181 02:18:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:12:18.474 
02:18:08 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:12:18.474 02:18:08 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:12:18.474 02:18:08 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:18.474 02:18:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:18.474 02:18:08 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:12:18.474 02:18:08 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:12:18.474 02:18:08 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:12:18.474 02:18:08 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:12:18.474 02:18:08 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:12:18.474 02:18:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:12:18.474 02:18:08 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:12:18.474 02:18:08 json_config -- json_config/json_config.sh@48 -- # local get_types 00:12:18.474 02:18:08 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:12:18.474 02:18:08 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:12:18.474 02:18:08 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:18.474 02:18:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:18.474 02:18:08 json_config -- json_config/json_config.sh@55 -- # return 0 00:12:18.474 02:18:08 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:12:18.474 02:18:08 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:12:18.474 02:18:08 json_config -- json_config/json_config.sh@286 
-- # [[ 0 -eq 1 ]] 00:12:18.474 02:18:08 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:12:18.474 02:18:08 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:12:18.474 02:18:08 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:12:18.474 02:18:08 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:18.474 02:18:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:18.474 02:18:08 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:12:18.474 02:18:08 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:12:18.474 02:18:08 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:12:18.474 02:18:08 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:12:18.474 02:18:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:12:18.732 MallocForNvmf0 00:12:18.732 02:18:09 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:12:18.732 02:18:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:12:18.991 MallocForNvmf1 00:12:18.991 02:18:09 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:12:18.991 02:18:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:12:19.250 [2024-07-11 02:18:09.663048] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:19.508 02:18:09 json_config -- 
json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:19.508 02:18:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:19.792 02:18:09 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:12:19.792 02:18:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:12:20.050 02:18:10 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:12:20.050 02:18:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:12:20.310 02:18:10 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:12:20.310 02:18:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:12:20.568 [2024-07-11 02:18:10.842800] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:12:20.568 02:18:10 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:12:20.568 02:18:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:20.568 02:18:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:20.568 02:18:10 json_config -- json_config/json_config.sh@293 -- # timing_exit 
json_config_setup_target 00:12:20.568 02:18:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:20.568 02:18:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:20.568 02:18:10 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:12:20.568 02:18:10 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:12:20.568 02:18:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:12:20.827 MallocBdevForConfigChangeCheck 00:12:20.827 02:18:11 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:12:20.827 02:18:11 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:20.827 02:18:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:20.827 02:18:11 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:12:20.827 02:18:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:12:21.396 02:18:11 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:12:21.396 INFO: shutting down applications... 
00:12:21.396 02:18:11 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:12:21.396 02:18:11 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:12:21.396 02:18:11 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:12:21.396 02:18:11 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:12:23.306 Calling clear_iscsi_subsystem 00:12:23.306 Calling clear_nvmf_subsystem 00:12:23.306 Calling clear_nbd_subsystem 00:12:23.306 Calling clear_ublk_subsystem 00:12:23.306 Calling clear_vhost_blk_subsystem 00:12:23.306 Calling clear_vhost_scsi_subsystem 00:12:23.306 Calling clear_bdev_subsystem 00:12:23.306 02:18:13 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:12:23.306 02:18:13 json_config -- json_config/json_config.sh@343 -- # count=100 00:12:23.306 02:18:13 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:12:23.307 02:18:13 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:12:23.307 02:18:13 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:12:23.307 02:18:13 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:12:23.565 02:18:13 json_config -- json_config/json_config.sh@345 -- # break 00:12:23.565 02:18:13 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:12:23.565 02:18:13 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:12:23.565 02:18:13 json_config -- json_config/common.sh@31 -- # local app=target 00:12:23.565 02:18:13 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:12:23.565 02:18:13 json_config -- json_config/common.sh@35 -- # [[ -n 1745291 ]] 00:12:23.566 02:18:13 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1745291 00:12:23.566 02:18:13 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:12:23.566 02:18:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:23.566 02:18:13 json_config -- json_config/common.sh@41 -- # kill -0 1745291 00:12:23.566 02:18:13 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:12:24.132 02:18:14 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:12:24.132 02:18:14 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:24.132 02:18:14 json_config -- json_config/common.sh@41 -- # kill -0 1745291 00:12:24.132 02:18:14 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:12:24.132 02:18:14 json_config -- json_config/common.sh@43 -- # break 00:12:24.132 02:18:14 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:12:24.132 02:18:14 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:12:24.132 SPDK target shutdown done 00:12:24.132 02:18:14 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:12:24.132 INFO: relaunching applications... 
00:12:24.132 02:18:14 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:12:24.132 02:18:14 json_config -- json_config/common.sh@9 -- # local app=target 00:12:24.132 02:18:14 json_config -- json_config/common.sh@10 -- # shift 00:12:24.132 02:18:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:12:24.132 02:18:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:12:24.132 02:18:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:12:24.132 02:18:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:24.132 02:18:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:24.132 02:18:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1746835 00:12:24.132 02:18:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:12:24.132 Waiting for target to run... 00:12:24.132 02:18:14 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:12:24.132 02:18:14 json_config -- json_config/common.sh@25 -- # waitforlisten 1746835 /var/tmp/spdk_tgt.sock 00:12:24.132 02:18:14 json_config -- common/autotest_common.sh@829 -- # '[' -z 1746835 ']' 00:12:24.132 02:18:14 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:12:24.132 02:18:14 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:24.132 02:18:14 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:12:24.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:12:24.132 02:18:14 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:24.132 02:18:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:24.132 [2024-07-11 02:18:14.335233] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:12:24.132 [2024-07-11 02:18:14.335340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1746835 ] 00:12:24.132 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.391 [2024-07-11 02:18:14.684359] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.391 [2024-07-11 02:18:14.751348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.685 [2024-07-11 02:18:17.759871] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.685 [2024-07-11 02:18:17.792223] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:12:27.685 02:18:17 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:27.685 02:18:17 json_config -- common/autotest_common.sh@862 -- # return 0 00:12:27.685 02:18:17 json_config -- json_config/common.sh@26 -- # echo '' 00:12:27.685 00:12:27.685 02:18:17 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:12:27.685 02:18:17 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:12:27.685 INFO: Checking if target configuration is the same... 
00:12:27.685 02:18:17 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:12:27.685 02:18:17 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:12:27.685 02:18:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:12:27.685 + '[' 2 -ne 2 ']' 00:12:27.685 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:12:27.685 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:12:27.685 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:27.685 +++ basename /dev/fd/62 00:12:27.685 ++ mktemp /tmp/62.XXX 00:12:27.685 + tmp_file_1=/tmp/62.Usc 00:12:27.685 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:12:27.685 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:12:27.685 + tmp_file_2=/tmp/spdk_tgt_config.json.Jsq 00:12:27.685 + ret=0 00:12:27.685 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:12:27.943 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:12:27.943 + diff -u /tmp/62.Usc /tmp/spdk_tgt_config.json.Jsq 00:12:27.943 + echo 'INFO: JSON config files are the same' 00:12:27.943 INFO: JSON config files are the same 00:12:27.943 + rm /tmp/62.Usc /tmp/spdk_tgt_config.json.Jsq 00:12:27.943 + exit 0 00:12:27.943 02:18:18 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:12:27.943 02:18:18 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:12:27.943 INFO: changing configuration and checking if this can be detected... 
00:12:27.943 02:18:18 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:12:27.943 02:18:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:12:28.201 02:18:18 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:12:28.201 02:18:18 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:12:28.201 02:18:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:12:28.201 + '[' 2 -ne 2 ']' 00:12:28.201 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:12:28.201 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:12:28.201 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:28.201 +++ basename /dev/fd/62 00:12:28.201 ++ mktemp /tmp/62.XXX 00:12:28.201 + tmp_file_1=/tmp/62.nRM 00:12:28.202 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:12:28.202 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:12:28.459 + tmp_file_2=/tmp/spdk_tgt_config.json.dwq 00:12:28.459 + ret=0 00:12:28.459 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:12:28.717 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:12:28.717 + diff -u /tmp/62.nRM /tmp/spdk_tgt_config.json.dwq 00:12:28.717 + ret=1 00:12:28.717 + echo '=== Start of file: /tmp/62.nRM ===' 00:12:28.717 + cat /tmp/62.nRM 00:12:28.717 + echo '=== End of file: /tmp/62.nRM ===' 00:12:28.717 + echo '' 00:12:28.717 + echo '=== Start of file: /tmp/spdk_tgt_config.json.dwq ===' 00:12:28.717 + cat /tmp/spdk_tgt_config.json.dwq 00:12:28.717 + echo '=== End of file: /tmp/spdk_tgt_config.json.dwq ===' 00:12:28.717 + echo '' 00:12:28.717 + rm /tmp/62.nRM /tmp/spdk_tgt_config.json.dwq 00:12:28.717 + exit 1 00:12:28.717 02:18:19 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:12:28.717 INFO: configuration change detected. 
00:12:28.717 02:18:19 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:12:28.717 02:18:19 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:12:28.717 02:18:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:28.717 02:18:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:28.717 02:18:19 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:12:28.717 02:18:19 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:12:28.717 02:18:19 json_config -- json_config/json_config.sh@317 -- # [[ -n 1746835 ]] 00:12:28.717 02:18:19 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:12:28.717 02:18:19 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:12:28.717 02:18:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:28.717 02:18:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:28.717 02:18:19 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:12:28.717 02:18:19 json_config -- json_config/json_config.sh@193 -- # uname -s 00:12:28.717 02:18:19 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:12:28.718 02:18:19 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:12:28.718 02:18:19 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:12:28.718 02:18:19 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:12:28.718 02:18:19 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:28.718 02:18:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:28.718 02:18:19 json_config -- json_config/json_config.sh@323 -- # killprocess 1746835 00:12:28.718 02:18:19 json_config -- common/autotest_common.sh@948 -- # '[' -z 1746835 ']' 00:12:28.718 02:18:19 json_config -- common/autotest_common.sh@952 -- # kill -0 
1746835 00:12:28.718 02:18:19 json_config -- common/autotest_common.sh@953 -- # uname 00:12:28.718 02:18:19 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:28.718 02:18:19 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1746835 00:12:28.975 02:18:19 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:28.975 02:18:19 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:28.975 02:18:19 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1746835' 00:12:28.975 killing process with pid 1746835 00:12:28.975 02:18:19 json_config -- common/autotest_common.sh@967 -- # kill 1746835 00:12:28.975 02:18:19 json_config -- common/autotest_common.sh@972 -- # wait 1746835 00:12:30.350 02:18:20 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:12:30.350 02:18:20 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:12:30.350 02:18:20 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:30.350 02:18:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:30.350 02:18:20 json_config -- json_config/json_config.sh@328 -- # return 0 00:12:30.350 02:18:20 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:12:30.350 INFO: Success 00:12:30.350 00:12:30.350 real 0m16.624s 00:12:30.350 user 0m19.301s 00:12:30.350 sys 0m2.046s 00:12:30.350 02:18:20 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:30.350 02:18:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:30.350 ************************************ 00:12:30.350 END TEST json_config 00:12:30.350 ************************************ 00:12:30.350 02:18:20 -- common/autotest_common.sh@1142 -- # return 0 00:12:30.350 02:18:20 -- 
spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:12:30.350 02:18:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:30.350 02:18:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:30.350 02:18:20 -- common/autotest_common.sh@10 -- # set +x 00:12:30.608 ************************************ 00:12:30.608 START TEST json_config_extra_key 00:12:30.608 ************************************ 00:12:30.608 02:18:20 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:12:30.608 02:18:20 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.608 02:18:20 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:12:30.608 02:18:20 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.608 02:18:20 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.608 02:18:20 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.608 02:18:20 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.608 02:18:20 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.608 02:18:20 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.608 02:18:20 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.608 02:18:20 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.608 02:18:20 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.608 02:18:20 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.608 02:18:20 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:30.608 02:18:20 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:12:30.608 02:18:20 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.608 02:18:20 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.608 02:18:20 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:30.608 02:18:20 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.608 02:18:20 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.608 02:18:20 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.608 02:18:20 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.608 02:18:20 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.608 02:18:20 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.609 02:18:20 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.609 02:18:20 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.609 02:18:20 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:12:30.609 02:18:20 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.609 02:18:20 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:12:30.609 02:18:20 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:30.609 02:18:20 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:30.609 02:18:20 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.609 02:18:20 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.609 02:18:20 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:12:30.609 02:18:20 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:30.609 02:18:20 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:30.609 02:18:20 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:30.609 02:18:20 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:12:30.609 02:18:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:12:30.609 02:18:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:12:30.609 02:18:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:12:30.609 02:18:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:12:30.609 02:18:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:12:30.609 02:18:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:12:30.609 02:18:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:12:30.609 02:18:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:12:30.609 02:18:20 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:12:30.609 02:18:20 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:12:30.609 INFO: launching applications... 
00:12:30.609 02:18:20 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:12:30.609 02:18:20 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:12:30.609 02:18:20 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:12:30.609 02:18:20 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:12:30.609 02:18:20 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:12:30.609 02:18:20 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:12:30.609 02:18:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:30.609 02:18:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:30.609 02:18:20 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1747550 00:12:30.609 02:18:20 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:12:30.609 02:18:20 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:12:30.609 Waiting for target to run... 
00:12:30.609 02:18:20 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1747550 /var/tmp/spdk_tgt.sock 00:12:30.609 02:18:20 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1747550 ']' 00:12:30.609 02:18:20 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:12:30.609 02:18:20 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:30.609 02:18:20 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:12:30.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:12:30.609 02:18:20 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:30.609 02:18:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:12:30.609 [2024-07-11 02:18:20.905630] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:12:30.609 [2024-07-11 02:18:20.905742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1747550 ] 00:12:30.609 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.867 [2024-07-11 02:18:21.260194] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.125 [2024-07-11 02:18:21.327036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.692 02:18:21 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:31.692 02:18:21 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:12:31.692 02:18:21 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:12:31.692 00:12:31.692 02:18:21 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:12:31.692 INFO: shutting down applications... 
00:12:31.692 02:18:21 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:12:31.692 02:18:21 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:12:31.692 02:18:21 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:12:31.692 02:18:21 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1747550 ]] 00:12:31.692 02:18:21 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1747550 00:12:31.692 02:18:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:12:31.692 02:18:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:31.692 02:18:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1747550 00:12:31.692 02:18:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:12:32.261 02:18:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:12:32.261 02:18:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:32.261 02:18:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1747550 00:12:32.261 02:18:22 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:12:32.261 02:18:22 json_config_extra_key -- json_config/common.sh@43 -- # break 00:12:32.261 02:18:22 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:12:32.261 02:18:22 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:12:32.261 SPDK target shutdown done 00:12:32.261 02:18:22 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:12:32.261 Success 00:12:32.261 00:12:32.261 real 0m1.646s 00:12:32.261 user 0m1.498s 00:12:32.261 sys 0m0.478s 00:12:32.261 02:18:22 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:32.261 02:18:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:12:32.261 
************************************ 00:12:32.261 END TEST json_config_extra_key 00:12:32.261 ************************************ 00:12:32.261 02:18:22 -- common/autotest_common.sh@1142 -- # return 0 00:12:32.261 02:18:22 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:12:32.261 02:18:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:32.261 02:18:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:32.261 02:18:22 -- common/autotest_common.sh@10 -- # set +x 00:12:32.261 ************************************ 00:12:32.261 START TEST alias_rpc 00:12:32.261 ************************************ 00:12:32.261 02:18:22 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:12:32.261 * Looking for test storage... 00:12:32.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:12:32.261 02:18:22 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:32.261 02:18:22 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1747712 00:12:32.261 02:18:22 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1747712 00:12:32.261 02:18:22 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:12:32.261 02:18:22 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1747712 ']' 00:12:32.261 02:18:22 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.261 02:18:22 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:32.261 02:18:22 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:32.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.261 02:18:22 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:32.261 02:18:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.261 [2024-07-11 02:18:22.593317] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:12:32.261 [2024-07-11 02:18:22.593423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1747712 ] 00:12:32.261 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.261 [2024-07-11 02:18:22.655041] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.519 [2024-07-11 02:18:22.742711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.778 02:18:22 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:32.778 02:18:22 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:12:32.778 02:18:22 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:12:33.037 02:18:23 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1747712 00:12:33.037 02:18:23 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1747712 ']' 00:12:33.037 02:18:23 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1747712 00:12:33.037 02:18:23 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:12:33.037 02:18:23 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:33.037 02:18:23 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1747712 00:12:33.037 02:18:23 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:33.037 02:18:23 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:33.037 
02:18:23 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1747712' 00:12:33.037 killing process with pid 1747712 00:12:33.037 02:18:23 alias_rpc -- common/autotest_common.sh@967 -- # kill 1747712 00:12:33.037 02:18:23 alias_rpc -- common/autotest_common.sh@972 -- # wait 1747712 00:12:33.295 00:12:33.295 real 0m1.089s 00:12:33.295 user 0m1.303s 00:12:33.295 sys 0m0.375s 00:12:33.295 02:18:23 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:33.295 02:18:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.295 ************************************ 00:12:33.295 END TEST alias_rpc 00:12:33.295 ************************************ 00:12:33.295 02:18:23 -- common/autotest_common.sh@1142 -- # return 0 00:12:33.295 02:18:23 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:12:33.295 02:18:23 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:12:33.295 02:18:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:33.295 02:18:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:33.295 02:18:23 -- common/autotest_common.sh@10 -- # set +x 00:12:33.295 ************************************ 00:12:33.295 START TEST spdkcli_tcp 00:12:33.295 ************************************ 00:12:33.295 02:18:23 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:12:33.295 * Looking for test storage... 
00:12:33.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:12:33.295 02:18:23 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:12:33.295 02:18:23 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:12:33.295 02:18:23 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:12:33.295 02:18:23 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:12:33.295 02:18:23 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:12:33.295 02:18:23 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:33.295 02:18:23 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:12:33.295 02:18:23 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:33.295 02:18:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:33.295 02:18:23 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1747866 00:12:33.295 02:18:23 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:12:33.295 02:18:23 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1747866 00:12:33.295 02:18:23 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1747866 ']' 00:12:33.295 02:18:23 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.295 02:18:23 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:33.295 02:18:23 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:33.295 02:18:23 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:33.295 02:18:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:33.551 [2024-07-11 02:18:23.734674] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:12:33.551 [2024-07-11 02:18:23.734781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1747866 ] 00:12:33.551 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.551 [2024-07-11 02:18:23.796713] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:33.551 [2024-07-11 02:18:23.888537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.551 [2024-07-11 02:18:23.888572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.809 02:18:24 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:33.809 02:18:24 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:12:33.809 02:18:24 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1747963 00:12:33.809 02:18:24 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:12:33.809 02:18:24 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:12:34.068 [ 00:12:34.069 "bdev_malloc_delete", 00:12:34.069 "bdev_malloc_create", 00:12:34.069 "bdev_null_resize", 00:12:34.069 "bdev_null_delete", 00:12:34.069 "bdev_null_create", 00:12:34.069 "bdev_nvme_cuse_unregister", 00:12:34.069 "bdev_nvme_cuse_register", 00:12:34.069 "bdev_opal_new_user", 00:12:34.069 "bdev_opal_set_lock_state", 00:12:34.069 "bdev_opal_delete", 00:12:34.069 "bdev_opal_get_info", 00:12:34.069 "bdev_opal_create", 00:12:34.069 "bdev_nvme_opal_revert", 00:12:34.069 
"bdev_nvme_opal_init", 00:12:34.069 "bdev_nvme_send_cmd", 00:12:34.069 "bdev_nvme_get_path_iostat", 00:12:34.069 "bdev_nvme_get_mdns_discovery_info", 00:12:34.069 "bdev_nvme_stop_mdns_discovery", 00:12:34.069 "bdev_nvme_start_mdns_discovery", 00:12:34.069 "bdev_nvme_set_multipath_policy", 00:12:34.069 "bdev_nvme_set_preferred_path", 00:12:34.069 "bdev_nvme_get_io_paths", 00:12:34.069 "bdev_nvme_remove_error_injection", 00:12:34.069 "bdev_nvme_add_error_injection", 00:12:34.069 "bdev_nvme_get_discovery_info", 00:12:34.069 "bdev_nvme_stop_discovery", 00:12:34.069 "bdev_nvme_start_discovery", 00:12:34.069 "bdev_nvme_get_controller_health_info", 00:12:34.069 "bdev_nvme_disable_controller", 00:12:34.069 "bdev_nvme_enable_controller", 00:12:34.069 "bdev_nvme_reset_controller", 00:12:34.069 "bdev_nvme_get_transport_statistics", 00:12:34.069 "bdev_nvme_apply_firmware", 00:12:34.069 "bdev_nvme_detach_controller", 00:12:34.069 "bdev_nvme_get_controllers", 00:12:34.069 "bdev_nvme_attach_controller", 00:12:34.069 "bdev_nvme_set_hotplug", 00:12:34.069 "bdev_nvme_set_options", 00:12:34.069 "bdev_passthru_delete", 00:12:34.069 "bdev_passthru_create", 00:12:34.069 "bdev_lvol_set_parent_bdev", 00:12:34.069 "bdev_lvol_set_parent", 00:12:34.069 "bdev_lvol_check_shallow_copy", 00:12:34.069 "bdev_lvol_start_shallow_copy", 00:12:34.069 "bdev_lvol_grow_lvstore", 00:12:34.069 "bdev_lvol_get_lvols", 00:12:34.069 "bdev_lvol_get_lvstores", 00:12:34.069 "bdev_lvol_delete", 00:12:34.069 "bdev_lvol_set_read_only", 00:12:34.069 "bdev_lvol_resize", 00:12:34.069 "bdev_lvol_decouple_parent", 00:12:34.069 "bdev_lvol_inflate", 00:12:34.069 "bdev_lvol_rename", 00:12:34.069 "bdev_lvol_clone_bdev", 00:12:34.069 "bdev_lvol_clone", 00:12:34.069 "bdev_lvol_snapshot", 00:12:34.069 "bdev_lvol_create", 00:12:34.069 "bdev_lvol_delete_lvstore", 00:12:34.069 "bdev_lvol_rename_lvstore", 00:12:34.069 "bdev_lvol_create_lvstore", 00:12:34.069 "bdev_raid_set_options", 00:12:34.069 "bdev_raid_remove_base_bdev", 
00:12:34.069 "bdev_raid_add_base_bdev", 00:12:34.069 "bdev_raid_delete", 00:12:34.069 "bdev_raid_create", 00:12:34.069 "bdev_raid_get_bdevs", 00:12:34.069 "bdev_error_inject_error", 00:12:34.069 "bdev_error_delete", 00:12:34.069 "bdev_error_create", 00:12:34.069 "bdev_split_delete", 00:12:34.069 "bdev_split_create", 00:12:34.069 "bdev_delay_delete", 00:12:34.069 "bdev_delay_create", 00:12:34.069 "bdev_delay_update_latency", 00:12:34.069 "bdev_zone_block_delete", 00:12:34.069 "bdev_zone_block_create", 00:12:34.069 "blobfs_create", 00:12:34.069 "blobfs_detect", 00:12:34.069 "blobfs_set_cache_size", 00:12:34.069 "bdev_aio_delete", 00:12:34.069 "bdev_aio_rescan", 00:12:34.069 "bdev_aio_create", 00:12:34.069 "bdev_ftl_set_property", 00:12:34.069 "bdev_ftl_get_properties", 00:12:34.069 "bdev_ftl_get_stats", 00:12:34.069 "bdev_ftl_unmap", 00:12:34.069 "bdev_ftl_unload", 00:12:34.069 "bdev_ftl_delete", 00:12:34.069 "bdev_ftl_load", 00:12:34.069 "bdev_ftl_create", 00:12:34.069 "bdev_virtio_attach_controller", 00:12:34.069 "bdev_virtio_scsi_get_devices", 00:12:34.069 "bdev_virtio_detach_controller", 00:12:34.069 "bdev_virtio_blk_set_hotplug", 00:12:34.069 "bdev_iscsi_delete", 00:12:34.069 "bdev_iscsi_create", 00:12:34.069 "bdev_iscsi_set_options", 00:12:34.069 "accel_error_inject_error", 00:12:34.069 "ioat_scan_accel_module", 00:12:34.069 "dsa_scan_accel_module", 00:12:34.069 "iaa_scan_accel_module", 00:12:34.069 "vfu_virtio_create_scsi_endpoint", 00:12:34.069 "vfu_virtio_scsi_remove_target", 00:12:34.069 "vfu_virtio_scsi_add_target", 00:12:34.069 "vfu_virtio_create_blk_endpoint", 00:12:34.069 "vfu_virtio_delete_endpoint", 00:12:34.069 "keyring_file_remove_key", 00:12:34.069 "keyring_file_add_key", 00:12:34.069 "keyring_linux_set_options", 00:12:34.069 "iscsi_get_histogram", 00:12:34.069 "iscsi_enable_histogram", 00:12:34.069 "iscsi_set_options", 00:12:34.069 "iscsi_get_auth_groups", 00:12:34.069 "iscsi_auth_group_remove_secret", 00:12:34.069 "iscsi_auth_group_add_secret", 
00:12:34.069 "iscsi_delete_auth_group", 00:12:34.069 "iscsi_create_auth_group", 00:12:34.069 "iscsi_set_discovery_auth", 00:12:34.069 "iscsi_get_options", 00:12:34.069 "iscsi_target_node_request_logout", 00:12:34.069 "iscsi_target_node_set_redirect", 00:12:34.069 "iscsi_target_node_set_auth", 00:12:34.069 "iscsi_target_node_add_lun", 00:12:34.069 "iscsi_get_stats", 00:12:34.069 "iscsi_get_connections", 00:12:34.069 "iscsi_portal_group_set_auth", 00:12:34.069 "iscsi_start_portal_group", 00:12:34.069 "iscsi_delete_portal_group", 00:12:34.069 "iscsi_create_portal_group", 00:12:34.069 "iscsi_get_portal_groups", 00:12:34.069 "iscsi_delete_target_node", 00:12:34.069 "iscsi_target_node_remove_pg_ig_maps", 00:12:34.069 "iscsi_target_node_add_pg_ig_maps", 00:12:34.069 "iscsi_create_target_node", 00:12:34.069 "iscsi_get_target_nodes", 00:12:34.069 "iscsi_delete_initiator_group", 00:12:34.069 "iscsi_initiator_group_remove_initiators", 00:12:34.069 "iscsi_initiator_group_add_initiators", 00:12:34.069 "iscsi_create_initiator_group", 00:12:34.069 "iscsi_get_initiator_groups", 00:12:34.069 "nvmf_set_crdt", 00:12:34.069 "nvmf_set_config", 00:12:34.069 "nvmf_set_max_subsystems", 00:12:34.069 "nvmf_stop_mdns_prr", 00:12:34.069 "nvmf_publish_mdns_prr", 00:12:34.069 "nvmf_subsystem_get_listeners", 00:12:34.069 "nvmf_subsystem_get_qpairs", 00:12:34.069 "nvmf_subsystem_get_controllers", 00:12:34.069 "nvmf_get_stats", 00:12:34.069 "nvmf_get_transports", 00:12:34.069 "nvmf_create_transport", 00:12:34.069 "nvmf_get_targets", 00:12:34.069 "nvmf_delete_target", 00:12:34.069 "nvmf_create_target", 00:12:34.069 "nvmf_subsystem_allow_any_host", 00:12:34.069 "nvmf_subsystem_remove_host", 00:12:34.069 "nvmf_subsystem_add_host", 00:12:34.069 "nvmf_ns_remove_host", 00:12:34.069 "nvmf_ns_add_host", 00:12:34.069 "nvmf_subsystem_remove_ns", 00:12:34.069 "nvmf_subsystem_add_ns", 00:12:34.070 "nvmf_subsystem_listener_set_ana_state", 00:12:34.070 "nvmf_discovery_get_referrals", 00:12:34.070 
"nvmf_discovery_remove_referral", 00:12:34.070 "nvmf_discovery_add_referral", 00:12:34.070 "nvmf_subsystem_remove_listener", 00:12:34.070 "nvmf_subsystem_add_listener", 00:12:34.070 "nvmf_delete_subsystem", 00:12:34.070 "nvmf_create_subsystem", 00:12:34.070 "nvmf_get_subsystems", 00:12:34.070 "env_dpdk_get_mem_stats", 00:12:34.070 "nbd_get_disks", 00:12:34.070 "nbd_stop_disk", 00:12:34.070 "nbd_start_disk", 00:12:34.070 "ublk_recover_disk", 00:12:34.070 "ublk_get_disks", 00:12:34.070 "ublk_stop_disk", 00:12:34.070 "ublk_start_disk", 00:12:34.070 "ublk_destroy_target", 00:12:34.070 "ublk_create_target", 00:12:34.070 "virtio_blk_create_transport", 00:12:34.070 "virtio_blk_get_transports", 00:12:34.070 "vhost_controller_set_coalescing", 00:12:34.070 "vhost_get_controllers", 00:12:34.070 "vhost_delete_controller", 00:12:34.070 "vhost_create_blk_controller", 00:12:34.070 "vhost_scsi_controller_remove_target", 00:12:34.070 "vhost_scsi_controller_add_target", 00:12:34.070 "vhost_start_scsi_controller", 00:12:34.070 "vhost_create_scsi_controller", 00:12:34.070 "thread_set_cpumask", 00:12:34.070 "framework_get_governor", 00:12:34.070 "framework_get_scheduler", 00:12:34.070 "framework_set_scheduler", 00:12:34.070 "framework_get_reactors", 00:12:34.070 "thread_get_io_channels", 00:12:34.070 "thread_get_pollers", 00:12:34.070 "thread_get_stats", 00:12:34.070 "framework_monitor_context_switch", 00:12:34.070 "spdk_kill_instance", 00:12:34.070 "log_enable_timestamps", 00:12:34.070 "log_get_flags", 00:12:34.070 "log_clear_flag", 00:12:34.070 "log_set_flag", 00:12:34.070 "log_get_level", 00:12:34.070 "log_set_level", 00:12:34.070 "log_get_print_level", 00:12:34.070 "log_set_print_level", 00:12:34.070 "framework_enable_cpumask_locks", 00:12:34.070 "framework_disable_cpumask_locks", 00:12:34.070 "framework_wait_init", 00:12:34.070 "framework_start_init", 00:12:34.070 "scsi_get_devices", 00:12:34.070 "bdev_get_histogram", 00:12:34.070 "bdev_enable_histogram", 00:12:34.070 
"bdev_set_qos_limit", 00:12:34.070 "bdev_set_qd_sampling_period", 00:12:34.070 "bdev_get_bdevs", 00:12:34.070 "bdev_reset_iostat", 00:12:34.070 "bdev_get_iostat", 00:12:34.070 "bdev_examine", 00:12:34.070 "bdev_wait_for_examine", 00:12:34.070 "bdev_set_options", 00:12:34.070 "notify_get_notifications", 00:12:34.070 "notify_get_types", 00:12:34.070 "accel_get_stats", 00:12:34.070 "accel_set_options", 00:12:34.070 "accel_set_driver", 00:12:34.070 "accel_crypto_key_destroy", 00:12:34.070 "accel_crypto_keys_get", 00:12:34.070 "accel_crypto_key_create", 00:12:34.070 "accel_assign_opc", 00:12:34.070 "accel_get_module_info", 00:12:34.070 "accel_get_opc_assignments", 00:12:34.070 "vmd_rescan", 00:12:34.070 "vmd_remove_device", 00:12:34.070 "vmd_enable", 00:12:34.070 "sock_get_default_impl", 00:12:34.070 "sock_set_default_impl", 00:12:34.070 "sock_impl_set_options", 00:12:34.070 "sock_impl_get_options", 00:12:34.070 "iobuf_get_stats", 00:12:34.070 "iobuf_set_options", 00:12:34.070 "keyring_get_keys", 00:12:34.070 "framework_get_pci_devices", 00:12:34.070 "framework_get_config", 00:12:34.070 "framework_get_subsystems", 00:12:34.070 "vfu_tgt_set_base_path", 00:12:34.070 "trace_get_info", 00:12:34.070 "trace_get_tpoint_group_mask", 00:12:34.070 "trace_disable_tpoint_group", 00:12:34.070 "trace_enable_tpoint_group", 00:12:34.070 "trace_clear_tpoint_mask", 00:12:34.070 "trace_set_tpoint_mask", 00:12:34.070 "spdk_get_version", 00:12:34.070 "rpc_get_methods" 00:12:34.070 ] 00:12:34.070 02:18:24 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:12:34.070 02:18:24 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:34.070 02:18:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:34.070 02:18:24 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:34.070 02:18:24 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1747866 00:12:34.070 02:18:24 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1747866 ']' 
00:12:34.070 02:18:24 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1747866 00:12:34.070 02:18:24 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:12:34.070 02:18:24 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:34.070 02:18:24 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1747866 00:12:34.070 02:18:24 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:34.070 02:18:24 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:34.070 02:18:24 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1747866' 00:12:34.070 killing process with pid 1747866 00:12:34.070 02:18:24 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1747866 00:12:34.070 02:18:24 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1747866 00:12:34.329 00:12:34.329 real 0m1.100s 00:12:34.329 user 0m2.054s 00:12:34.329 sys 0m0.416s 00:12:34.329 02:18:24 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:34.329 02:18:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:34.329 ************************************ 00:12:34.329 END TEST spdkcli_tcp 00:12:34.329 ************************************ 00:12:34.329 02:18:24 -- common/autotest_common.sh@1142 -- # return 0 00:12:34.329 02:18:24 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:12:34.329 02:18:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:34.329 02:18:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:34.329 02:18:24 -- common/autotest_common.sh@10 -- # set +x 00:12:34.588 ************************************ 00:12:34.588 START TEST dpdk_mem_utility 00:12:34.588 ************************************ 00:12:34.588 02:18:24 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:12:34.588 * Looking for test storage... 00:12:34.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:12:34.588 02:18:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:12:34.588 02:18:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1748034 00:12:34.588 02:18:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1748034 00:12:34.588 02:18:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:12:34.588 02:18:24 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1748034 ']' 00:12:34.588 02:18:24 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.588 02:18:24 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:34.588 02:18:24 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.588 02:18:24 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:34.588 02:18:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:12:34.588 [2024-07-11 02:18:24.873940] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:12:34.588 [2024-07-11 02:18:24.874034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1748034 ] 00:12:34.588 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.588 [2024-07-11 02:18:24.933631] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.849 [2024-07-11 02:18:25.021044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.849 02:18:25 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:34.849 02:18:25 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:12:34.849 02:18:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:12:34.849 02:18:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:12:34.849 02:18:25 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.849 02:18:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:12:34.849 { 00:12:34.849 "filename": "/tmp/spdk_mem_dump.txt" 00:12:34.849 } 00:12:34.849 02:18:25 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.849 02:18:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:12:35.107 DPDK memory size 814.000000 MiB in 1 heap(s) 00:12:35.107 1 heaps totaling size 814.000000 MiB 00:12:35.107 size: 814.000000 MiB heap id: 0 00:12:35.107 end heaps---------- 00:12:35.107 8 mempools totaling size 598.116089 MiB 00:12:35.107 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:12:35.107 size: 158.602051 MiB name: PDU_data_out_Pool 00:12:35.107 size: 84.521057 MiB name: bdev_io_1748034 00:12:35.107 size: 51.011292 MiB name: evtpool_1748034 
00:12:35.107 size: 50.003479 MiB name: msgpool_1748034
00:12:35.107 size: 21.763794 MiB name: PDU_Pool
00:12:35.107 size: 19.513306 MiB name: SCSI_TASK_Pool
00:12:35.107 size: 0.026123 MiB name: Session_Pool
00:12:35.107 end mempools-------
00:12:35.107 6 memzones totaling size 4.142822 MiB
00:12:35.107 size: 1.000366 MiB name: RG_ring_0_1748034
00:12:35.107 size: 1.000366 MiB name: RG_ring_1_1748034
00:12:35.107 size: 1.000366 MiB name: RG_ring_4_1748034
00:12:35.107 size: 1.000366 MiB name: RG_ring_5_1748034
00:12:35.107 size: 0.125366 MiB name: RG_ring_2_1748034
00:12:35.107 size: 0.015991 MiB name: RG_ring_3_1748034
00:12:35.107 end memzones-------
00:12:35.107 02:18:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:12:35.107 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15
00:12:35.108 list of free elements. size: 12.519348 MiB
00:12:35.108 element at address: 0x200000400000 with size: 1.999512 MiB
00:12:35.108 element at address: 0x200018e00000 with size: 0.999878 MiB
00:12:35.108 element at address: 0x200019000000 with size: 0.999878 MiB
00:12:35.108 element at address: 0x200003e00000 with size: 0.996277 MiB
00:12:35.108 element at address: 0x200031c00000 with size: 0.994446 MiB
00:12:35.108 element at address: 0x200013800000 with size: 0.978699 MiB
00:12:35.108 element at address: 0x200007000000 with size: 0.959839 MiB
00:12:35.108 element at address: 0x200019200000 with size: 0.936584 MiB
00:12:35.108 element at address: 0x200000200000 with size: 0.841614 MiB
00:12:35.108 element at address: 0x20001aa00000 with size: 0.582886 MiB
00:12:35.108 element at address: 0x20000b200000 with size: 0.490723 MiB
00:12:35.108 element at address: 0x200000800000 with size: 0.487793 MiB
00:12:35.108 element at address: 0x200019400000 with size: 0.485657 MiB
00:12:35.108 element at address: 0x200027e00000 with size: 0.410034 MiB
00:12:35.108 element at address: 0x200003a00000 with size: 0.355530 MiB
00:12:35.108 list of standard malloc elements. size: 199.218079 MiB
00:12:35.108 element at address: 0x20000b3fff80 with size: 132.000122 MiB
00:12:35.108 element at address: 0x2000071fff80 with size: 64.000122 MiB
00:12:35.108 element at address: 0x200018efff80 with size: 1.000122 MiB
00:12:35.108 element at address: 0x2000190fff80 with size: 1.000122 MiB
00:12:35.108 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:12:35.108 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:12:35.108 element at address: 0x2000192eff00 with size: 0.062622 MiB
00:12:35.108 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:12:35.108 element at address: 0x2000192efdc0 with size: 0.000305 MiB
00:12:35.108 element at address: 0x2000002d7740 with size: 0.000183 MiB
00:12:35.108 element at address: 0x2000002d7800 with size: 0.000183 MiB
00:12:35.108 element at address: 0x2000002d78c0 with size: 0.000183 MiB
00:12:35.108 element at address: 0x2000002d7ac0 with size: 0.000183 MiB
00:12:35.108 element at address: 0x2000002d7b80 with size: 0.000183 MiB
00:12:35.108 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:12:35.108 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:12:35.108 element at address: 0x20000087ce00 with size: 0.000183 MiB
00:12:35.108 element at address: 0x20000087cec0 with size: 0.000183 MiB
00:12:35.108 element at address: 0x2000008fd180 with size: 0.000183 MiB
00:12:35.108 element at address: 0x200003a5b040 with size: 0.000183 MiB
00:12:35.108 element at address: 0x200003adb300 with size: 0.000183 MiB
00:12:35.108 element at address: 0x200003adb500 with size: 0.000183 MiB
00:12:35.108 element at address: 0x200003adf7c0 with size: 0.000183 MiB
00:12:35.108 element at address: 0x200003affa80 with size: 0.000183 MiB
00:12:35.108 element at address: 0x200003affb40 with size: 0.000183 MiB
00:12:35.108 element at address: 0x200003eff0c0 with size: 0.000183 MiB
00:12:35.108 element at address: 0x2000070fdd80 with size: 0.000183 MiB
00:12:35.108 element at address: 0x20000b27da00 with size: 0.000183 MiB
00:12:35.108 element at address: 0x20000b27dac0 with size: 0.000183 MiB
00:12:35.108 element at address: 0x20000b2fdd80 with size: 0.000183 MiB
00:12:35.108 element at address: 0x2000138fa8c0 with size: 0.000183 MiB
00:12:35.108 element at address: 0x2000192efc40 with size: 0.000183 MiB
00:12:35.108 element at address: 0x2000192efd00 with size: 0.000183 MiB
00:12:35.108 element at address: 0x2000194bc740 with size: 0.000183 MiB
00:12:35.108 element at address: 0x20001aa95380 with size: 0.000183 MiB
00:12:35.108 element at address: 0x20001aa95440 with size: 0.000183 MiB
00:12:35.108 element at address: 0x200027e68f80 with size: 0.000183 MiB
00:12:35.108 element at address: 0x200027e69040 with size: 0.000183 MiB
00:12:35.108 element at address: 0x200027e6fc40 with size: 0.000183 MiB
00:12:35.108 element at address: 0x200027e6fe40 with size: 0.000183 MiB
00:12:35.108 element at address: 0x200027e6ff00 with size: 0.000183 MiB
00:12:35.108 list of memzone associated elements. size: 602.262573 MiB
00:12:35.108 element at address: 0x20001aa95500 with size: 211.416748 MiB
00:12:35.108 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:12:35.108 element at address: 0x200027e6ffc0 with size: 157.562561 MiB
00:12:35.108 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:12:35.108 element at address: 0x2000139fab80 with size: 84.020630 MiB
00:12:35.108 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1748034_0
00:12:35.108 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:12:35.108 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1748034_0
00:12:35.108 element at address: 0x200003fff380 with size: 48.003052 MiB
00:12:35.108 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1748034_0
00:12:35.108 element at address: 0x2000195be940 with size: 20.255554 MiB
00:12:35.108 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:12:35.108 element at address: 0x200031dfeb40 with size: 18.005066 MiB
00:12:35.108 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:12:35.108 element at address: 0x2000005ffe00 with size: 2.000488 MiB
00:12:35.108 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1748034
00:12:35.108 element at address: 0x200003bffe00 with size: 2.000488 MiB
00:12:35.108 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1748034
00:12:35.108 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:12:35.108 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1748034
00:12:35.108 element at address: 0x20000b2fde40 with size: 1.008118 MiB
00:12:35.108 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:12:35.108 element at address: 0x2000194bc800 with size: 1.008118 MiB
00:12:35.108 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:12:35.108 element at address: 0x2000070fde40 with size: 1.008118 MiB
00:12:35.108 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:12:35.108 element at address: 0x2000008fd240 with size: 1.008118 MiB
00:12:35.108 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:12:35.108 element at address: 0x200003eff180 with size: 1.000488 MiB
00:12:35.108 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1748034
00:12:35.108 element at address: 0x200003affc00 with size: 1.000488 MiB
00:12:35.108 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1748034
00:12:35.108 element at address: 0x2000138fa980 with size: 1.000488 MiB
00:12:35.108 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1748034
00:12:35.108 element at address: 0x200031cfe940 with size: 1.000488 MiB
00:12:35.108 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1748034
00:12:35.108 element at address: 0x200003a5b100 with size: 0.500488 MiB
00:12:35.108 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1748034
00:12:35.108 element at address: 0x20000b27db80 with size: 0.500488 MiB
00:12:35.108 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:12:35.108 element at address: 0x20000087cf80 with size: 0.500488 MiB
00:12:35.108 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:12:35.108 element at address: 0x20001947c540 with size: 0.250488 MiB
00:12:35.109 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:12:35.109 element at address: 0x200003adf880 with size: 0.125488 MiB
00:12:35.109 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1748034
00:12:35.109 element at address: 0x2000070f5b80 with size: 0.031738 MiB
00:12:35.109 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:12:35.109 element at address: 0x200027e69100 with size: 0.023743 MiB
00:12:35.109 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:12:35.109 element at address: 0x200003adb5c0 with size: 0.016113 MiB
00:12:35.109 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1748034
00:12:35.109 element at address: 0x200027e6f240 with size: 0.002441 MiB
00:12:35.109 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:12:35.109 element at address: 0x2000002d7980 with size: 0.000305 MiB
00:12:35.109 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1748034
00:12:35.109 element at address: 0x200003adb3c0 with size: 0.000305 MiB
00:12:35.109 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1748034
00:12:35.109 element at address: 0x200027e6fd00 with size: 0.000305 MiB
00:12:35.109 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:12:35.109 02:18:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:12:35.109 02:18:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1748034
00:12:35.109 02:18:25 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1748034 ']'
00:12:35.109 02:18:25 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1748034
00:12:35.109 02:18:25 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname
00:12:35.109 02:18:25 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:12:35.109 02:18:25 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1748034
00:12:35.109 02:18:25 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:12:35.109 02:18:25 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:12:35.109 02:18:25 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1748034'
killing process with pid 1748034
00:12:35.109 02:18:25 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1748034
00:12:35.109 02:18:25 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1748034
00:12:35.367
00:12:35.367 real 0m0.893s
00:12:35.367 user 0m0.957s
00:12:35.367 sys 0m0.362s
00:12:35.367 02:18:25 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:35.367 02:18:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:12:35.367 ************************************
00:12:35.367 END TEST dpdk_mem_utility
00:12:35.367 ************************************
00:12:35.367 02:18:25 -- common/autotest_common.sh@1142 -- # return 0
00:12:35.367 02:18:25 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:12:35.367 02:18:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:12:35.367 02:18:25 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:35.367 02:18:25 -- common/autotest_common.sh@10 -- # set +x
00:12:35.367 ************************************
00:12:35.367 START TEST event
00:12:35.367 ************************************
00:12:35.367 02:18:25 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:12:35.367 * Looking for test storage...
00:12:35.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:12:35.367 02:18:25 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:12:35.367 02:18:25 event -- bdev/nbd_common.sh@6 -- # set -e
00:12:35.367 02:18:25 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:12:35.367 02:18:25 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:12:35.367 02:18:25 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:35.367 02:18:25 event -- common/autotest_common.sh@10 -- # set +x
00:12:35.367 ************************************
00:12:35.367 START TEST event_perf
00:12:35.367 ************************************
00:12:35.367 02:18:25 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:12:35.625 Running I/O for 1 seconds...[2024-07-11 02:18:25.793625] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:12:35.625 [2024-07-11 02:18:25.793690] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1748193 ]
00:12:35.625 EAL: No free 2048 kB hugepages reported on node 1
00:12:35.625 [2024-07-11 02:18:25.850992] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:35.625 [2024-07-11 02:18:25.941013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:12:35.625 [2024-07-11 02:18:25.941065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:12:35.625 [2024-07-11 02:18:25.941118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:12:35.625 [2024-07-11 02:18:25.941121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:36.998 Running I/O for 1 seconds...
00:12:36.998 lcore 0: 233473
00:12:36.998 lcore 1: 233473
00:12:36.998 lcore 2: 233472
00:12:36.998 lcore 3: 233472
00:12:36.998 done.
00:12:36.998
00:12:36.998 real 0m1.224s
00:12:36.998 user 0m4.148s
00:12:36.998 sys 0m0.068s
00:12:36.998 02:18:27 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:36.998 02:18:27 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:12:36.998 ************************************
00:12:36.998 END TEST event_perf
00:12:36.998 ************************************
00:12:36.998 02:18:27 event -- common/autotest_common.sh@1142 -- # return 0
00:12:36.998 02:18:27 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:12:36.998 02:18:27 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:12:36.998 02:18:27 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:36.998 02:18:27 event -- common/autotest_common.sh@10 -- # set +x
00:12:36.998 ************************************
00:12:36.998 START TEST event_reactor
00:12:36.998 ************************************
00:12:36.998 02:18:27 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:12:36.998 [2024-07-11 02:18:27.070041] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:12:36.998 [2024-07-11 02:18:27.070104] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1748316 ]
00:12:36.998 EAL: No free 2048 kB hugepages reported on node 1
00:12:36.998 [2024-07-11 02:18:27.127800] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:36.998 [2024-07-11 02:18:27.217910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:37.932 test_start
00:12:37.932 oneshot
00:12:37.932 tick 100
00:12:37.932 tick 100
00:12:37.932 tick 250
00:12:37.932 tick 100
00:12:37.932 tick 100
00:12:37.932 tick 100
00:12:37.932 tick 250
00:12:37.932 tick 500
00:12:37.932 tick 100
00:12:37.932 tick 100
00:12:37.932 tick 250
00:12:37.932 tick 100
00:12:37.932 tick 100
00:12:37.932 test_end
00:12:37.932
00:12:37.932 real 0m1.222s
00:12:37.932 user 0m1.152s
00:12:37.932 sys 0m0.063s
00:12:37.932 02:18:28 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:37.932 02:18:28 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:12:37.932 ************************************
00:12:37.932 END TEST event_reactor
00:12:37.932 ************************************
00:12:37.932 02:18:28 event -- common/autotest_common.sh@1142 -- # return 0
00:12:37.932 02:18:28 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:12:37.932 02:18:28 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:12:37.932 02:18:28 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:37.932 02:18:28 event -- common/autotest_common.sh@10 -- # set +x
00:12:37.932 ************************************
00:12:37.932 START TEST event_reactor_perf
00:12:37.932 ************************************
00:12:37.932 02:18:28 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:12:37.932 [2024-07-11 02:18:28.348167] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:12:37.932 [2024-07-11 02:18:28.348238] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1748527 ]
00:12:38.190 EAL: No free 2048 kB hugepages reported on node 1
00:12:38.190 [2024-07-11 02:18:28.407045] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:38.190 [2024-07-11 02:18:28.497292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:39.564 test_start
00:12:39.564 test_end
00:12:39.564 Performance: 328231 events per second
00:12:39.564
00:12:39.564 real 0m1.227s
00:12:39.564 user 0m1.140s
00:12:39.564 sys 0m0.079s
00:12:39.564 02:18:29 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:39.564 02:18:29 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:12:39.564 ************************************
00:12:39.564 END TEST event_reactor_perf
00:12:39.564 ************************************
00:12:39.564 02:18:29 event -- common/autotest_common.sh@1142 -- # return 0
00:12:39.564 02:18:29 event -- event/event.sh@49 -- # uname -s
00:12:39.564 02:18:29 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:12:39.564 02:18:29 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:12:39.564 02:18:29 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:12:39.564 02:18:29 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:39.564 02:18:29 event -- common/autotest_common.sh@10 -- # set +x
00:12:39.564 ************************************
00:12:39.564 START TEST event_scheduler
00:12:39.564 ************************************
00:12:39.564 02:18:29 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:12:39.564 * Looking for test storage...
00:12:39.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:12:39.564 02:18:29 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:12:39.564 02:18:29 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1748673
00:12:39.564 02:18:29 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:12:39.564 02:18:29 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:12:39.564 02:18:29 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1748673
00:12:39.564 02:18:29 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1748673 ']'
00:12:39.564 02:18:29 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:39.564 02:18:29 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100
00:12:39.564 02:18:29 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:39.564 02:18:29 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable
00:12:39.564 02:18:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:12:39.564 [2024-07-11 02:18:29.721127] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:12:39.564 [2024-07-11 02:18:29.721231] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1748673 ]
00:12:39.564 EAL: No free 2048 kB hugepages reported on node 1
00:12:39.564 [2024-07-11 02:18:29.781023] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:39.564 [2024-07-11 02:18:29.872077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:39.564 [2024-07-11 02:18:29.872128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:12:39.564 [2024-07-11 02:18:29.872178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:12:39.564 [2024-07-11 02:18:29.872181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:12:39.564 02:18:29 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:12:39.564 02:18:29 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0
00:12:39.564 02:18:29 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:12:39.564 02:18:29 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:39.564 02:18:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:12:39.564 [2024-07-11 02:18:29.977174] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:12:39.564 [2024-07-11 02:18:29.977207] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor
00:12:39.564 [2024-07-11 02:18:29.977226] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:12:39.564 [2024-07-11 02:18:29.977239] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:12:39.564 [2024-07-11 02:18:29.977250] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:12:39.564 02:18:29 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:39.564 02:18:29 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:12:39.564 02:18:29 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:39.564 02:18:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:12:39.823 [2024-07-11 02:18:30.065310] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:12:39.823 02:18:30 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:39.823 02:18:30 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:12:39.823 02:18:30 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:12:39.823 02:18:30 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:39.823 02:18:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:12:39.823 ************************************
00:12:39.823 START TEST scheduler_create_thread
00:12:39.823 ************************************
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:12:39.823 2
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:12:39.823 3
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:12:39.823 4
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:12:39.823 5
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:12:39.823 6
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:12:39.823 7
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:12:39.823 8
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:12:39.823 9
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:12:39.823 10
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:39.823 02:18:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:12:41.226 02:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:41.226
00:12:41.226 real 0m1.172s
00:12:41.226 user 0m0.011s
00:12:41.226 sys 0m0.004s
00:12:41.226 02:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:41.226 02:18:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:12:41.226 ************************************
00:12:41.226 END TEST scheduler_create_thread
00:12:41.226 ************************************
00:12:41.226 02:18:31 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0
00:12:41.226 02:18:31 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:12:41.226 02:18:31 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1748673
00:12:41.226 02:18:31 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1748673 ']'
00:12:41.226 02:18:31 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1748673
00:12:41.226 02:18:31 event.event_scheduler -- common/autotest_common.sh@953 -- # uname
00:12:41.226 02:18:31 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:12:41.226 02:18:31 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1748673
00:12:41.226 02:18:31 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:12:41.226 02:18:31 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:12:41.226 02:18:31 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1748673'
killing process with pid 1748673
00:12:41.226 02:18:31 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1748673
00:12:41.226 02:18:31 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1748673
00:12:41.486 [2024-07-11 02:18:31.743327] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:12:41.486
00:12:41.486 real 0m2.262s
00:12:41.486 user 0m2.770s
00:12:41.486 sys 0m0.305s
00:12:41.486 02:18:31 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:41.486 02:18:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:12:41.486 ************************************
00:12:41.486 END TEST event_scheduler
00:12:41.486 ************************************
00:12:41.745 02:18:31 event -- common/autotest_common.sh@1142 -- # return 0
00:12:41.745 02:18:31 event -- event/event.sh@51 -- # modprobe -n nbd
00:12:41.745 02:18:31 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:12:41.746 02:18:31 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:12:41.746 02:18:31 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:41.746 02:18:31 event -- common/autotest_common.sh@10 -- # set +x
00:12:41.746 ************************************
00:12:41.746 START TEST app_repeat
00:12:41.746 ************************************
00:12:41.746 02:18:31 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test
00:12:41.746 02:18:31 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:41.746 02:18:31 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:12:41.746 02:18:31 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:12:41.746 02:18:31 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:12:41.746 02:18:31 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:12:41.746 02:18:31 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:12:41.746 02:18:31 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:12:41.746 02:18:31 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1748927
00:12:41.746 02:18:31 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:12:41.746 02:18:31 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:12:41.746 02:18:31 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1748927'
Process app_repeat pid: 1748927
00:12:41.746 02:18:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:12:41.746 02:18:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
spdk_app_start Round 0
00:12:41.746 02:18:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1748927 /var/tmp/spdk-nbd.sock
00:12:41.746 02:18:31 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1748927 ']'
00:12:41.746 02:18:31 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:12:41.746 02:18:31 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:12:41.746 02:18:31 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:12:41.746 02:18:31 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:41.746 02:18:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:41.746 [2024-07-11 02:18:31.966714] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:12:41.746 [2024-07-11 02:18:31.966787] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1748927 ] 00:12:41.746 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.746 [2024-07-11 02:18:32.025755] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:41.746 [2024-07-11 02:18:32.117527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.746 [2024-07-11 02:18:32.117533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.004 02:18:32 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:42.004 02:18:32 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:12:42.004 02:18:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:42.261 Malloc0 00:12:42.261 02:18:32 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:42.519 Malloc1 00:12:42.519 02:18:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:42.519 02:18:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:42.519 02:18:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:42.519 02:18:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local 
bdev_list 00:12:42.519 02:18:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:42.519 02:18:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:42.519 02:18:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:42.519 02:18:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:42.519 02:18:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:42.519 02:18:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:42.519 02:18:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:42.519 02:18:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:42.519 02:18:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:42.519 02:18:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:42.519 02:18:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:42.519 02:18:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:42.776 /dev/nbd0 00:12:42.776 02:18:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:42.776 02:18:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:42.776 02:18:33 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:42.776 02:18:33 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:12:42.776 02:18:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:42.776 02:18:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:42.776 02:18:33 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:42.776 02:18:33 event.app_repeat -- common/autotest_common.sh@871 
-- # break 00:12:42.776 02:18:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:42.776 02:18:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:42.776 02:18:33 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:43.034 1+0 records in 00:12:43.034 1+0 records out 00:12:43.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000158916 s, 25.8 MB/s 00:12:43.034 02:18:33 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:12:43.034 02:18:33 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:12:43.034 02:18:33 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:12:43.034 02:18:33 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:43.034 02:18:33 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:12:43.034 02:18:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:43.034 02:18:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:43.034 02:18:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:43.291 /dev/nbd1 00:12:43.291 02:18:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:43.291 02:18:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:43.291 02:18:33 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:43.291 02:18:33 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:12:43.291 02:18:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:43.292 02:18:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i 
<= 20 )) 00:12:43.292 02:18:33 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:43.292 02:18:33 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:12:43.292 02:18:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:43.292 02:18:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:43.292 02:18:33 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:43.292 1+0 records in 00:12:43.292 1+0 records out 00:12:43.292 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236886 s, 17.3 MB/s 00:12:43.292 02:18:33 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:12:43.292 02:18:33 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:12:43.292 02:18:33 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:12:43.292 02:18:33 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:43.292 02:18:33 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:12:43.292 02:18:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:43.292 02:18:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:43.292 02:18:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:43.292 02:18:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:43.292 02:18:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:43.548 02:18:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:43.548 { 00:12:43.548 "nbd_device": "/dev/nbd0", 00:12:43.548 
"bdev_name": "Malloc0" 00:12:43.548 }, 00:12:43.548 { 00:12:43.548 "nbd_device": "/dev/nbd1", 00:12:43.548 "bdev_name": "Malloc1" 00:12:43.548 } 00:12:43.548 ]' 00:12:43.548 02:18:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:43.548 { 00:12:43.548 "nbd_device": "/dev/nbd0", 00:12:43.548 "bdev_name": "Malloc0" 00:12:43.548 }, 00:12:43.548 { 00:12:43.548 "nbd_device": "/dev/nbd1", 00:12:43.548 "bdev_name": "Malloc1" 00:12:43.548 } 00:12:43.548 ]' 00:12:43.548 02:18:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:43.548 02:18:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:43.548 /dev/nbd1' 00:12:43.548 02:18:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:43.548 /dev/nbd1' 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 
bs=4096 count=256 00:12:43.549 256+0 records in 00:12:43.549 256+0 records out 00:12:43.549 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00605345 s, 173 MB/s 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:43.549 256+0 records in 00:12:43.549 256+0 records out 00:12:43.549 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251382 s, 41.7 MB/s 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:43.549 256+0 records in 00:12:43.549 256+0 records out 00:12:43.549 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263979 s, 39.7 MB/s 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 
/dev/nbd0 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:12:43.549 02:18:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:12:43.805 02:18:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:43.805 02:18:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:43.805 02:18:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:43.805 02:18:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:43.805 02:18:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:43.805 02:18:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.805 02:18:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:44.062 02:18:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:44.062 02:18:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:44.062 02:18:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:44.062 02:18:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.062 02:18:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.062 02:18:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:44.062 02:18:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:44.062 02:18:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.062 02:18:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.062 
02:18:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:44.319 02:18:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:44.319 02:18:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:44.319 02:18:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:44.319 02:18:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.319 02:18:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.319 02:18:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:44.319 02:18:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:44.319 02:18:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.319 02:18:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:44.319 02:18:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:44.319 02:18:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:44.576 02:18:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:44.576 02:18:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:44.576 02:18:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:44.576 02:18:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:44.576 02:18:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:44.576 02:18:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:44.576 02:18:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:44.576 02:18:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:44.576 02:18:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:44.576 
02:18:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:44.576 02:18:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:44.576 02:18:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:44.576 02:18:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:45.140 02:18:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:45.140 [2024-07-11 02:18:35.427210] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:45.140 [2024-07-11 02:18:35.516237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.140 [2024-07-11 02:18:35.516238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.397 [2024-07-11 02:18:35.562666] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:45.397 [2024-07-11 02:18:35.562726] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:47.925 02:18:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:47.925 02:18:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:12:47.925 spdk_app_start Round 1 00:12:47.925 02:18:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1748927 /var/tmp/spdk-nbd.sock 00:12:47.925 02:18:38 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1748927 ']' 00:12:47.925 02:18:38 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:47.925 02:18:38 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:47.925 02:18:38 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:12:47.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:47.925 02:18:38 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:47.925 02:18:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:48.184 02:18:38 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:48.184 02:18:38 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:12:48.184 02:18:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:48.750 Malloc0 00:12:48.750 02:18:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:49.009 Malloc1 00:12:49.009 02:18:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:49.009 02:18:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:49.009 02:18:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:49.009 02:18:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:49.009 02:18:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:49.009 02:18:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:49.009 02:18:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:49.009 02:18:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:49.009 02:18:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:49.009 02:18:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:49.009 02:18:39 event.app_repeat -- bdev/nbd_common.sh@11 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:49.009 02:18:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:49.009 02:18:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:49.009 02:18:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:49.009 02:18:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:49.009 02:18:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:49.268 /dev/nbd0 00:12:49.268 02:18:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:49.268 02:18:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:49.268 02:18:39 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:49.268 02:18:39 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:12:49.268 02:18:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:49.268 02:18:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:49.268 02:18:39 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:49.268 02:18:39 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:12:49.268 02:18:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:49.268 02:18:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:49.268 02:18:39 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:49.268 1+0 records in 00:12:49.268 1+0 records out 00:12:49.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218492 s, 18.7 MB/s 00:12:49.268 02:18:39 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:12:49.268 02:18:39 
event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:12:49.268 02:18:39 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:12:49.268 02:18:39 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:49.268 02:18:39 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:12:49.268 02:18:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:49.268 02:18:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:49.268 02:18:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:49.526 /dev/nbd1 00:12:49.526 02:18:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:49.526 02:18:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:49.526 02:18:39 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:49.526 02:18:39 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:12:49.526 02:18:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:49.526 02:18:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:49.526 02:18:39 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:49.526 02:18:39 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:12:49.526 02:18:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:49.526 02:18:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:49.526 02:18:39 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:49.526 1+0 records in 00:12:49.526 1+0 records out 00:12:49.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264501 s, 
15.5 MB/s 00:12:49.526 02:18:39 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:12:49.526 02:18:39 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:12:49.527 02:18:39 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:12:49.527 02:18:39 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:49.527 02:18:39 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:12:49.527 02:18:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:49.527 02:18:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:49.527 02:18:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:49.527 02:18:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:49.527 02:18:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:49.785 02:18:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:49.785 { 00:12:49.785 "nbd_device": "/dev/nbd0", 00:12:49.785 "bdev_name": "Malloc0" 00:12:49.785 }, 00:12:49.785 { 00:12:49.785 "nbd_device": "/dev/nbd1", 00:12:49.785 "bdev_name": "Malloc1" 00:12:49.785 } 00:12:49.785 ]' 00:12:49.785 02:18:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:49.785 { 00:12:49.785 "nbd_device": "/dev/nbd0", 00:12:49.785 "bdev_name": "Malloc0" 00:12:49.785 }, 00:12:49.785 { 00:12:49.785 "nbd_device": "/dev/nbd1", 00:12:49.785 "bdev_name": "Malloc1" 00:12:49.785 } 00:12:49.785 ]' 00:12:49.785 02:18:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:49.785 02:18:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:49.785 /dev/nbd1' 00:12:49.785 02:18:40 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:49.785 /dev/nbd1' 00:12:49.785 02:18:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:49.785 02:18:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:49.785 02:18:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:49.785 02:18:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:49.785 02:18:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:49.785 02:18:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:49.785 02:18:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:49.785 02:18:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:49.785 02:18:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:49.785 02:18:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:12:49.785 02:18:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:49.785 02:18:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:50.043 256+0 records in 00:12:50.043 256+0 records out 00:12:50.043 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0050015 s, 210 MB/s 00:12:50.043 02:18:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:50.043 02:18:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:50.043 256+0 records in 00:12:50.043 256+0 records out 00:12:50.043 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247669 s, 42.3 MB/s 00:12:50.043 02:18:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:50.043 02:18:40 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:50.043 256+0 records in 00:12:50.043 256+0 records out 00:12:50.043 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265974 s, 39.4 MB/s 00:12:50.043 02:18:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:50.043 02:18:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:50.043 02:18:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:50.043 02:18:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:50.043 02:18:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:12:50.043 02:18:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:50.043 02:18:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:50.043 02:18:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:50.043 02:18:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:12:50.043 02:18:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:50.043 02:18:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:12:50.043 02:18:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:12:50.043 02:18:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:50.043 02:18:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:50.044 02:18:40 event.app_repeat -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:50.044 02:18:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:50.044 02:18:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:50.044 02:18:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.044 02:18:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:50.302 02:18:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:50.302 02:18:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:50.302 02:18:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:50.302 02:18:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.302 02:18:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.302 02:18:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:50.302 02:18:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:50.302 02:18:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.302 02:18:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.302 02:18:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:50.560 02:18:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:50.560 02:18:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:50.560 02:18:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:50.560 02:18:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.560 02:18:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.560 02:18:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:12:50.560 02:18:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:50.560 02:18:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.560 02:18:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:50.560 02:18:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:50.560 02:18:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:50.818 02:18:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:50.818 02:18:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:50.818 02:18:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:51.076 02:18:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:51.076 02:18:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:51.076 02:18:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:51.076 02:18:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:51.076 02:18:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:51.076 02:18:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:51.076 02:18:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:51.076 02:18:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:51.076 02:18:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:51.076 02:18:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:51.334 02:18:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:51.334 [2024-07-11 02:18:41.732894] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:51.592 [2024-07-11 02:18:41.822265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 
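The nbd_get_count sequence traced above (RPC `nbd_get_disks`, `jq -r '.[] | .nbd_device'`, then `grep -c /dev/nbd` with a `true` fallback) reduces to the following sketch. This is a hypothetical, RPC-free reconstruction for illustration: the disk-name list is passed in as a string instead of being fetched over the socket.

```shell
# Hypothetical, RPC-free sketch of the nbd_get_count logic traced
# above: count how many lines of a disk-name list name an nbd device.
# The `|| true` mirrors the `true` seen in the trace -- grep -c exits
# non-zero when there are zero matches, and an empty disk list must
# still yield a count of 0 instead of aborting under `set -e`.
nbd_get_count() {
    local nbd_disks_name=$1
    echo "$nbd_disks_name" | grep -c /dev/nbd || true
}
```

This is why the trace shows `-- # true` followed by `count=0` when all devices have been stopped: the fallback converts grep's failure exit into a clean zero count.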
00:12:51.592 [2024-07-11 02:18:41.822269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.592 [2024-07-11 02:18:41.873570] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:51.592 [2024-07-11 02:18:41.873644] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:54.873 02:18:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:54.873 02:18:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:12:54.873 spdk_app_start Round 2 00:12:54.873 02:18:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1748927 /var/tmp/spdk-nbd.sock 00:12:54.873 02:18:44 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1748927 ']' 00:12:54.873 02:18:44 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:54.873 02:18:44 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:54.873 02:18:44 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:54.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:12:54.873 02:18:44 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:54.873 02:18:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:54.873 02:18:44 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:54.873 02:18:44 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:12:54.873 02:18:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:54.873 Malloc0 00:12:54.873 02:18:45 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:55.132 Malloc1 00:12:55.132 02:18:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:55.132 02:18:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:55.132 02:18:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:55.132 02:18:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:55.132 02:18:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:55.132 02:18:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:55.132 02:18:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:55.132 02:18:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:55.132 02:18:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:55.132 02:18:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:55.132 02:18:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:55.132 02:18:45 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:12:55.132 02:18:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:55.132 02:18:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:55.132 02:18:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:55.132 02:18:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:55.390 /dev/nbd0 00:12:55.647 02:18:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:55.647 02:18:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:55.647 02:18:45 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:55.647 02:18:45 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:12:55.647 02:18:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:55.647 02:18:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:55.647 02:18:45 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:55.647 02:18:45 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:12:55.647 02:18:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:55.647 02:18:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:55.647 02:18:45 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:55.647 1+0 records in 00:12:55.647 1+0 records out 00:12:55.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000154447 s, 26.5 MB/s 00:12:55.647 02:18:45 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:12:55.647 02:18:45 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:12:55.647 02:18:45 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:12:55.647 02:18:45 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:55.647 02:18:45 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:12:55.647 02:18:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:55.647 02:18:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:55.647 02:18:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:55.905 /dev/nbd1 00:12:55.905 02:18:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:55.905 02:18:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:55.905 02:18:46 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:55.905 02:18:46 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:12:55.905 02:18:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:55.905 02:18:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:55.905 02:18:46 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:55.905 02:18:46 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:12:55.905 02:18:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:55.905 02:18:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:55.905 02:18:46 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:55.905 1+0 records in 00:12:55.905 1+0 records out 00:12:55.905 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207334 s, 19.8 MB/s 00:12:55.905 02:18:46 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:12:55.905 02:18:46 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:12:55.905 02:18:46 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:12:55.905 02:18:46 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:55.905 02:18:46 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:12:55.905 02:18:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:55.905 02:18:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:55.905 02:18:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:55.905 02:18:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:55.905 02:18:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:56.164 { 00:12:56.164 "nbd_device": "/dev/nbd0", 00:12:56.164 "bdev_name": "Malloc0" 00:12:56.164 }, 00:12:56.164 { 00:12:56.164 "nbd_device": "/dev/nbd1", 00:12:56.164 "bdev_name": "Malloc1" 00:12:56.164 } 00:12:56.164 ]' 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:56.164 { 00:12:56.164 "nbd_device": "/dev/nbd0", 00:12:56.164 "bdev_name": "Malloc0" 00:12:56.164 }, 00:12:56.164 { 00:12:56.164 "nbd_device": "/dev/nbd1", 00:12:56.164 "bdev_name": "Malloc1" 00:12:56.164 } 00:12:56.164 ]' 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:56.164 /dev/nbd1' 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:56.164 /dev/nbd1' 00:12:56.164 
02:18:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:56.164 256+0 records in 00:12:56.164 256+0 records out 00:12:56.164 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00510779 s, 205 MB/s 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:56.164 256+0 records in 00:12:56.164 256+0 records out 00:12:56.164 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248283 s, 42.2 MB/s 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:56.164 256+0 records in 00:12:56.164 256+0 records out 00:12:56.164 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263581 s, 39.8 MB/s 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.164 02:18:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:56.730 02:18:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:56.730 02:18:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:56.730 02:18:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:56.730 02:18:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.730 02:18:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.730 02:18:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:56.730 02:18:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:56.730 02:18:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.730 02:18:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.730 02:18:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:56.988 02:18:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:56.988 02:18:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:56.988 02:18:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:56.988 02:18:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.988 02:18:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.988 02:18:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:56.988 02:18:47 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:12:56.988 02:18:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.988 02:18:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:56.988 02:18:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:56.988 02:18:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:57.245 02:18:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:57.245 02:18:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:57.245 02:18:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:57.245 02:18:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:57.245 02:18:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:57.245 02:18:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:57.245 02:18:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:57.245 02:18:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:57.245 02:18:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:57.245 02:18:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:57.245 02:18:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:57.245 02:18:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:57.245 02:18:47 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:57.501 02:18:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:57.757 [2024-07-11 02:18:48.031556] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:57.757 [2024-07-11 02:18:48.120973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.757 [2024-07-11 02:18:48.121004] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.757 [2024-07-11 02:18:48.171486] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:57.757 [2024-07-11 02:18:48.171564] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:13:01.037 02:18:50 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1748927 /var/tmp/spdk-nbd.sock 00:13:01.037 02:18:50 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1748927 ']' 00:13:01.037 02:18:50 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:01.037 02:18:50 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:01.037 02:18:50 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:01.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
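The nbd_dd_data_verify write/verify pattern traced earlier (fill a temp file from /dev/urandom, `dd` it onto each nbd device with `oflag=direct`, then `cmp` each device back against the file) can be sketched device-free as below. This is a hypothetical reconstruction: plain files stand in for the /dev/nbd* targets, and the direct-I/O flags are dropped since they only apply to block devices.

```shell
# Hypothetical, device-free sketch of the nbd_dd_data_verify pattern
# traced above: "write" fills a 1 MiB reference file with random data
# and copies it to every target; "verify" compares each target back
# against the reference byte-for-byte (first 1048576 bytes, matching
# the `cmp -n 1M` in the trace).
nbd_dd_data_verify() {
    local operation=$1 tmp_file=$2; shift 2
    local targets=("$@") t
    if [ "$operation" = write ]; then
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
        for t in "${targets[@]}"; do
            dd if="$tmp_file" of="$t" bs=4096 count=256 2>/dev/null
        done
    elif [ "$operation" = verify ]; then
        for t in "${targets[@]}"; do
            cmp -n 1048576 "$tmp_file" "$t" || return 1
        done
    fi
}
```

Splitting write and verify into separate passes lets the harness stop the devices, restart them, or flush caches between the two phases without re-generating the random reference data.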
00:13:01.037 02:18:50 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:01.037 02:18:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:13:01.037 02:18:51 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:01.037 02:18:51 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:13:01.037 02:18:51 event.app_repeat -- event/event.sh@39 -- # killprocess 1748927 00:13:01.037 02:18:51 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1748927 ']' 00:13:01.037 02:18:51 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1748927 00:13:01.037 02:18:51 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:13:01.037 02:18:51 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:01.037 02:18:51 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1748927 00:13:01.037 02:18:51 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:01.037 02:18:51 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:01.037 02:18:51 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1748927' 00:13:01.037 killing process with pid 1748927 00:13:01.037 02:18:51 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1748927 00:13:01.037 02:18:51 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1748927 00:13:01.037 spdk_app_start is called in Round 0. 00:13:01.037 Shutdown signal received, stop current app iteration 00:13:01.037 Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 reinitialization... 00:13:01.037 spdk_app_start is called in Round 1. 00:13:01.037 Shutdown signal received, stop current app iteration 00:13:01.037 Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 reinitialization... 00:13:01.037 spdk_app_start is called in Round 2. 
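The killprocess helper traced above (kill -0 existence check, `ps --no-headers -o comm=` name lookup, SIGTERM, then wait) can be sketched as below. This is a hypothetical reconstruction from the trace; the sudo re-kill branch visible in the log (`'[' reactor_0 = sudo ']'`) is omitted for brevity.

```shell
# Hypothetical sketch of the killprocess helper traced above: confirm
# the PID exists, look up its command name for the log message, send
# SIGTERM, and reap the process so its exit status is collected.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # process must exist
    local name
    name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap; ignore the TERM status
}
```

The trailing `wait` is the important part for a test harness: without it the reactor process would linger as a zombie and its PID could be reused by the next test round.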
00:13:01.037 Shutdown signal received, stop current app iteration 00:13:01.037 Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 reinitialization... 00:13:01.037 spdk_app_start is called in Round 3. 00:13:01.037 Shutdown signal received, stop current app iteration 00:13:01.037 02:18:51 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:13:01.037 02:18:51 event.app_repeat -- event/event.sh@42 -- # return 0 00:13:01.037 00:13:01.037 real 0m19.423s 00:13:01.037 user 0m43.362s 00:13:01.037 sys 0m3.512s 00:13:01.037 02:18:51 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:01.037 02:18:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:13:01.037 ************************************ 00:13:01.037 END TEST app_repeat 00:13:01.037 ************************************ 00:13:01.037 02:18:51 event -- common/autotest_common.sh@1142 -- # return 0 00:13:01.037 02:18:51 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:13:01.037 02:18:51 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:13:01.037 02:18:51 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:01.037 02:18:51 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:01.037 02:18:51 event -- common/autotest_common.sh@10 -- # set +x 00:13:01.037 ************************************ 00:13:01.037 START TEST cpu_locks 00:13:01.037 ************************************ 00:13:01.037 02:18:51 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:13:01.295 * Looking for test storage... 
00:13:01.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:13:01.295 02:18:51 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:13:01.295 02:18:51 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:13:01.295 02:18:51 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:13:01.295 02:18:51 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:13:01.295 02:18:51 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:01.295 02:18:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:01.295 02:18:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:01.295 ************************************ 00:13:01.295 START TEST default_locks 00:13:01.295 ************************************ 00:13:01.295 02:18:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:13:01.295 02:18:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1750958 00:13:01.295 02:18:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:13:01.295 02:18:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1750958 00:13:01.295 02:18:51 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1750958 ']' 00:13:01.295 02:18:51 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.295 02:18:51 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:01.295 02:18:51 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:01.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.295 02:18:51 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:01.295 02:18:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:13:01.295 [2024-07-11 02:18:51.562750] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:01.295 [2024-07-11 02:18:51.562864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1750958 ] 00:13:01.295 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.295 [2024-07-11 02:18:51.623094] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.295 [2024-07-11 02:18:51.713923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.553 02:18:51 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:01.553 02:18:51 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:13:01.553 02:18:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1750958 00:13:01.553 02:18:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1750958 00:13:01.553 02:18:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:02.121 lslocks: write error 00:13:02.121 02:18:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1750958 00:13:02.121 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1750958 ']' 00:13:02.121 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1750958 00:13:02.121 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:13:02.121 02:18:52 event.cpu_locks.default_locks 
-- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:02.121 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1750958 00:13:02.121 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:02.121 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:02.121 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1750958' 00:13:02.121 killing process with pid 1750958 00:13:02.121 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1750958 00:13:02.121 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1750958 00:13:02.122 02:18:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1750958 00:13:02.122 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:13:02.122 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1750958 00:13:02.122 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:13:02.395 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:02.395 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:13:02.395 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:02.395 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 1750958 00:13:02.395 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1750958 ']' 00:13:02.395 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.395 02:18:52 event.cpu_locks.default_locks -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:13:02.395 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.395 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:02.395 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:13:02.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1750958) - No such process 00:13:02.395 ERROR: process (pid: 1750958) is no longer running 00:13:02.395 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:02.395 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:13:02.395 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:13:02.395 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:02.395 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:02.395 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:02.395 02:18:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:13:02.395 02:18:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:13:02.395 02:18:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:13:02.395 02:18:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:13:02.395 00:13:02.395 real 0m1.045s 00:13:02.395 user 0m1.045s 00:13:02.395 sys 0m0.532s 00:13:02.395 02:18:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:02.395 02:18:52 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:13:02.395 ************************************ 00:13:02.395 END TEST default_locks 00:13:02.395 ************************************ 00:13:02.395 02:18:52 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:13:02.395 02:18:52 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:13:02.395 02:18:52 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:02.395 02:18:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:02.395 02:18:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:02.395 ************************************ 00:13:02.395 START TEST default_locks_via_rpc 00:13:02.395 ************************************ 00:13:02.395 02:18:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:13:02.395 02:18:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1751090 00:13:02.395 02:18:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:13:02.395 02:18:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1751090 00:13:02.395 02:18:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1751090 ']' 00:13:02.395 02:18:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.395 02:18:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:02.395 02:18:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
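The locks_exist check traced in these cpu_locks tests (`lslocks -p PID | grep -q spdk_cpu_lock`) can be sketched as below. This is a hypothetical reconstruction: it assumes, as the trace suggests, that SPDK's CPU-core lock files contain the string `spdk_cpu_lock` in their path, so listing the file locks held by the target PID and grepping is enough to tell whether the instance holds its core locks.

```shell
# Hypothetical sketch of the locks_exist check traced above: list the
# file locks held by a PID with lslocks(8) and return success only if
# one of them is an SPDK CPU-core lock file. grep -q gives a quiet
# boolean exit status suitable for '[' ... ']' style test conditions.
locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}
```

This also explains the `lslocks: write error` line in the log above: grep -q exits as soon as it sees a match, closing the pipe while lslocks is still writing, which is harmless here.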
00:13:02.395 02:18:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:02.395 02:18:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.395 [2024-07-11 02:18:52.663700] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:02.395 [2024-07-11 02:18:52.663802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1751090 ] 00:13:02.395 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.395 [2024-07-11 02:18:52.726486] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.679 [2024-07-11 02:18:52.818058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.679 02:18:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:02.679 02:18:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:13:02.679 02:18:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:13:02.679 02:18:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.679 02:18:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.679 02:18:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.679 02:18:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:13:02.679 02:18:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:13:02.679 02:18:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:13:02.679 02:18:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 
00:13:02.679 02:18:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:13:02.679 02:18:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.679 02:18:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.679 02:18:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.679 02:18:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1751090 00:13:02.679 02:18:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1751090 00:13:02.679 02:18:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:03.251 02:18:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1751090 00:13:03.251 02:18:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1751090 ']' 00:13:03.251 02:18:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1751090 00:13:03.251 02:18:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:13:03.251 02:18:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:03.251 02:18:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1751090 00:13:03.251 02:18:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:03.251 02:18:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:03.251 02:18:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1751090' 00:13:03.251 killing process with pid 1751090 00:13:03.251 02:18:53 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@967 -- # kill 1751090 00:13:03.251 02:18:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1751090 00:13:03.513 00:13:03.513 real 0m1.078s 00:13:03.513 user 0m1.093s 00:13:03.513 sys 0m0.512s 00:13:03.513 02:18:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:03.513 02:18:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.513 ************************************ 00:13:03.513 END TEST default_locks_via_rpc 00:13:03.513 ************************************ 00:13:03.513 02:18:53 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:13:03.513 02:18:53 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:13:03.513 02:18:53 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:03.513 02:18:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:03.513 02:18:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:03.513 ************************************ 00:13:03.513 START TEST non_locking_app_on_locked_coremask 00:13:03.513 ************************************ 00:13:03.513 02:18:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:13:03.513 02:18:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1751220 00:13:03.513 02:18:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:13:03.513 02:18:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1751220 /var/tmp/spdk.sock 00:13:03.513 02:18:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1751220 ']' 
00:13:03.513 02:18:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.513 02:18:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:03.513 02:18:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.513 02:18:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:03.513 02:18:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:03.513 [2024-07-11 02:18:53.798728] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:03.513 [2024-07-11 02:18:53.798829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1751220 ] 00:13:03.513 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.513 [2024-07-11 02:18:53.858804] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.772 [2024-07-11 02:18:53.949599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.772 02:18:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:03.772 02:18:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:13:03.772 02:18:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1751227 00:13:03.772 02:18:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:13:03.772 02:18:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1751227 /var/tmp/spdk2.sock 00:13:03.772 02:18:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1751227 ']' 00:13:03.772 02:18:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:03.772 02:18:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:03.772 02:18:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:03.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:03.772 02:18:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:03.772 02:18:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:04.030 [2024-07-11 02:18:54.222592] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:04.030 [2024-07-11 02:18:54.222685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1751227 ] 00:13:04.030 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.030 [2024-07-11 02:18:54.308135] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:13:04.030 [2024-07-11 02:18:54.308168] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.288 [2024-07-11 02:18:54.483426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.854 02:18:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:04.854 02:18:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:13:04.854 02:18:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1751220 00:13:04.854 02:18:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1751220 00:13:04.854 02:18:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:05.787 lslocks: write error 00:13:05.787 02:18:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1751220 00:13:05.787 02:18:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1751220 ']' 00:13:05.787 02:18:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1751220 00:13:05.788 02:18:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:13:05.788 02:18:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:05.788 02:18:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1751220 00:13:05.788 02:18:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:05.788 02:18:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:05.788 02:18:55 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 1751220' 00:13:05.788 killing process with pid 1751220 00:13:05.788 02:18:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1751220 00:13:05.788 02:18:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1751220 00:13:06.046 02:18:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1751227 00:13:06.046 02:18:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1751227 ']' 00:13:06.046 02:18:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1751227 00:13:06.046 02:18:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:13:06.046 02:18:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:06.046 02:18:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1751227 00:13:06.046 02:18:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:06.046 02:18:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:06.046 02:18:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1751227' 00:13:06.046 killing process with pid 1751227 00:13:06.046 02:18:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1751227 00:13:06.046 02:18:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1751227 00:13:06.306 00:13:06.306 real 0m2.983s 00:13:06.306 user 0m3.341s 00:13:06.306 sys 0m1.058s 00:13:06.306 02:18:56 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:06.306 02:18:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:06.306 ************************************ 00:13:06.306 END TEST non_locking_app_on_locked_coremask 00:13:06.306 ************************************ 00:13:06.566 02:18:56 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:13:06.566 02:18:56 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:13:06.566 02:18:56 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:06.566 02:18:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:06.566 02:18:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:06.566 ************************************ 00:13:06.566 START TEST locking_app_on_unlocked_coremask 00:13:06.566 ************************************ 00:13:06.566 02:18:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:13:06.566 02:18:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1751474 00:13:06.566 02:18:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1751474 /var/tmp/spdk.sock 00:13:06.566 02:18:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:13:06.566 02:18:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1751474 ']' 00:13:06.566 02:18:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.566 02:18:56 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:13:06.566 02:18:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.566 02:18:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:06.566 02:18:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:06.566 [2024-07-11 02:18:56.832989] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:06.566 [2024-07-11 02:18:56.833072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1751474 ] 00:13:06.566 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.566 [2024-07-11 02:18:56.893013] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:13:06.566 [2024-07-11 02:18:56.893057] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.566 [2024-07-11 02:18:56.980418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.825 02:18:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:06.825 02:18:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:13:06.825 02:18:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1751561 00:13:06.825 02:18:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1751561 /var/tmp/spdk2.sock 00:13:06.825 02:18:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1751561 ']' 00:13:06.825 02:18:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:06.825 02:18:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:06.825 02:18:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:06.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:06.825 02:18:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:06.825 02:18:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:13:06.825 02:18:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:07.083 [2024-07-11 02:18:57.256147] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:13:07.083 [2024-07-11 02:18:57.256247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1751561 ] 00:13:07.083 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.083 [2024-07-11 02:18:57.347211] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.341 [2024-07-11 02:18:57.522797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.908 02:18:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:07.908 02:18:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:13:07.908 02:18:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1751561 00:13:07.908 02:18:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:07.908 02:18:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1751561 00:13:08.843 lslocks: write error 00:13:08.843 02:18:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1751474 00:13:08.843 02:18:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1751474 ']' 00:13:08.843 02:18:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1751474 00:13:08.843 02:18:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:13:08.843 02:18:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:08.843 02:18:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1751474 00:13:08.843 02:18:58 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:08.843 02:18:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:08.843 02:18:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1751474' 00:13:08.843 killing process with pid 1751474 00:13:08.843 02:18:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1751474 00:13:08.843 02:18:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1751474 00:13:09.101 02:18:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1751561 00:13:09.101 02:18:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1751561 ']' 00:13:09.101 02:18:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1751561 00:13:09.101 02:18:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:13:09.101 02:18:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:09.101 02:18:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1751561 00:13:09.102 02:18:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:09.102 02:18:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:09.102 02:18:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1751561' 00:13:09.102 killing process with pid 1751561 00:13:09.102 02:18:59 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@967 -- # kill 1751561 00:13:09.102 02:18:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1751561 00:13:09.362 00:13:09.362 real 0m2.986s 00:13:09.362 user 0m3.319s 00:13:09.362 sys 0m1.058s 00:13:09.362 02:18:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:09.362 02:18:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:09.362 ************************************ 00:13:09.362 END TEST locking_app_on_unlocked_coremask 00:13:09.362 ************************************ 00:13:09.621 02:18:59 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:13:09.621 02:18:59 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:13:09.621 02:18:59 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:09.621 02:18:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:09.621 02:18:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:09.621 ************************************ 00:13:09.621 START TEST locking_app_on_locked_coremask 00:13:09.621 ************************************ 00:13:09.621 02:18:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:13:09.621 02:18:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1751808 00:13:09.621 02:18:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:13:09.621 02:18:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1751808 /var/tmp/spdk.sock 00:13:09.621 02:18:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 
1751808 ']' 00:13:09.621 02:18:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.621 02:18:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:09.621 02:18:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.621 02:18:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:09.621 02:18:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:09.621 [2024-07-11 02:18:59.874649] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:09.621 [2024-07-11 02:18:59.874742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1751808 ] 00:13:09.621 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.621 [2024-07-11 02:18:59.933041] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.621 [2024-07-11 02:19:00.020853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.880 02:19:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:09.880 02:19:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:13:09.880 02:19:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1751896 00:13:09.880 02:19:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1751896 
/var/tmp/spdk2.sock 00:13:09.881 02:19:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:13:09.881 02:19:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:13:09.881 02:19:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1751896 /var/tmp/spdk2.sock 00:13:09.881 02:19:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:13:09.881 02:19:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:09.881 02:19:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:13:09.881 02:19:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:09.881 02:19:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1751896 /var/tmp/spdk2.sock 00:13:09.881 02:19:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1751896 ']' 00:13:09.881 02:19:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:09.881 02:19:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:09.881 02:19:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:09.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:13:09.881 02:19:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:09.881 02:19:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:09.881 [2024-07-11 02:19:00.294665] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:09.881 [2024-07-11 02:19:00.294769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1751896 ] 00:13:10.139 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.139 [2024-07-11 02:19:00.383937] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1751808 has claimed it. 00:13:10.139 [2024-07-11 02:19:00.384009] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:13:10.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1751896) - No such process 00:13:10.705 ERROR: process (pid: 1751896) is no longer running 00:13:10.705 02:19:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:10.705 02:19:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:13:10.705 02:19:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:13:10.705 02:19:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:10.705 02:19:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:10.705 02:19:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:10.705 02:19:01 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@122 -- # locks_exist 1751808 00:13:10.705 02:19:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1751808 00:13:10.705 02:19:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:10.963 lslocks: write error 00:13:10.963 02:19:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1751808 00:13:10.963 02:19:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1751808 ']' 00:13:10.963 02:19:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1751808 00:13:10.963 02:19:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:13:11.222 02:19:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:11.222 02:19:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1751808 00:13:11.222 02:19:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:11.222 02:19:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:11.222 02:19:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1751808' 00:13:11.222 killing process with pid 1751808 00:13:11.222 02:19:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1751808 00:13:11.222 02:19:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1751808 00:13:11.481 00:13:11.481 real 0m1.857s 00:13:11.481 user 0m2.144s 00:13:11.481 sys 0m0.617s 00:13:11.481 02:19:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:11.481 
02:19:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:11.481 ************************************ 00:13:11.481 END TEST locking_app_on_locked_coremask 00:13:11.481 ************************************ 00:13:11.481 02:19:01 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:13:11.481 02:19:01 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:13:11.481 02:19:01 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:11.481 02:19:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:11.481 02:19:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:11.481 ************************************ 00:13:11.481 START TEST locking_overlapped_coremask 00:13:11.481 ************************************ 00:13:11.481 02:19:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:13:11.481 02:19:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1752037 00:13:11.481 02:19:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:13:11.481 02:19:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1752037 /var/tmp/spdk.sock 00:13:11.481 02:19:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1752037 ']' 00:13:11.481 02:19:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.481 02:19:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:11.481 02:19:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.481 02:19:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:11.481 02:19:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:11.481 [2024-07-11 02:19:01.784080] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:11.481 [2024-07-11 02:19:01.784171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752037 ] 00:13:11.481 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.481 [2024-07-11 02:19:01.842929] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:11.739 [2024-07-11 02:19:01.932139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.739 [2024-07-11 02:19:01.932206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.739 [2024-07-11 02:19:01.932209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.739 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:11.739 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:13:11.739 02:19:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1752047 00:13:11.739 02:19:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1752047 /var/tmp/spdk2.sock 00:13:11.739 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:13:11.739 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg 
waitforlisten 1752047 /var/tmp/spdk2.sock 00:13:11.739 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:13:11.739 02:19:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:13:11.739 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:11.739 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:13:11.739 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:11.739 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1752047 /var/tmp/spdk2.sock 00:13:11.739 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1752047 ']' 00:13:11.739 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:11.739 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:11.739 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:11.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:11.739 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:11.739 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:11.997 [2024-07-11 02:19:02.211297] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:13:11.997 [2024-07-11 02:19:02.211405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752047 ] 00:13:11.997 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.997 [2024-07-11 02:19:02.303003] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1752037 has claimed it. 00:13:11.997 [2024-07-11 02:19:02.303058] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:13:12.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1752047) - No such process 00:13:12.563 ERROR: process (pid: 1752047) is no longer running 00:13:12.563 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:12.563 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:13:12.563 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:13:12.563 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:12.563 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:12.563 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:12.563 02:19:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:13:12.563 02:19:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:13:12.563 02:19:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:13:12.563 02:19:02 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:13:12.563 02:19:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1752037 00:13:12.563 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1752037 ']' 00:13:12.563 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1752037 00:13:12.563 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:13:12.563 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:12.563 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1752037 00:13:12.563 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:12.563 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:12.563 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1752037' 00:13:12.563 killing process with pid 1752037 00:13:12.563 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 1752037 00:13:12.563 02:19:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1752037 00:13:12.822 00:13:12.822 real 0m1.515s 00:13:12.822 user 0m4.208s 00:13:12.822 sys 0m0.433s 00:13:12.822 02:19:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:13.081 02:19:03 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:13.081 ************************************ 00:13:13.081 END TEST locking_overlapped_coremask 00:13:13.081 ************************************ 00:13:13.081 02:19:03 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:13:13.081 02:19:03 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:13:13.081 02:19:03 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:13.081 02:19:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:13.081 02:19:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:13.081 ************************************ 00:13:13.081 START TEST locking_overlapped_coremask_via_rpc 00:13:13.081 ************************************ 00:13:13.081 02:19:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:13:13.081 02:19:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1752177 00:13:13.081 02:19:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:13:13.081 02:19:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1752177 /var/tmp/spdk.sock 00:13:13.081 02:19:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1752177 ']' 00:13:13.081 02:19:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.081 02:19:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:13.081 02:19:03 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.081 02:19:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:13.081 02:19:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.081 [2024-07-11 02:19:03.357987] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:13.081 [2024-07-11 02:19:03.358092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752177 ] 00:13:13.081 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.081 [2024-07-11 02:19:03.419731] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:13:13.081 [2024-07-11 02:19:03.419781] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:13.348 [2024-07-11 02:19:03.510674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.348 [2024-07-11 02:19:03.510764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:13.348 [2024-07-11 02:19:03.510799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.348 02:19:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:13.348 02:19:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:13:13.348 02:19:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1752273 00:13:13.348 02:19:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1752273 /var/tmp/spdk2.sock 00:13:13.348 02:19:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:13:13.348 02:19:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1752273 ']' 00:13:13.348 02:19:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:13.348 02:19:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:13.348 02:19:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:13.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:13:13.348 02:19:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:13.348 02:19:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.609 [2024-07-11 02:19:03.783260] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:13.609 [2024-07-11 02:19:03.783364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752273 ] 00:13:13.609 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.609 [2024-07-11 02:19:03.873813] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:13:13.609 [2024-07-11 02:19:03.873860] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:13.867 [2024-07-11 02:19:04.051181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:13.867 [2024-07-11 02:19:04.054569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:13.867 [2024-07-11 02:19:04.054572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.432 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:14.432 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:13:14.432 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:13:14.432 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.432 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.432 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:13:14.432 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:14.432 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:14.432 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:14.432 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:13:14.432 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:14.432 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:13:14.432 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:14.432 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:14.432 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.432 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.432 [2024-07-11 02:19:04.832613] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1752177 has claimed it. 
00:13:14.432 request: 00:13:14.432 { 00:13:14.432 "method": "framework_enable_cpumask_locks", 00:13:14.432 "req_id": 1 00:13:14.432 } 00:13:14.432 Got JSON-RPC error response 00:13:14.432 response: 00:13:14.432 { 00:13:14.432 "code": -32603, 00:13:14.432 "message": "Failed to claim CPU core: 2" 00:13:14.432 } 00:13:14.433 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:14.433 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:13:14.433 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:14.433 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:14.433 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:14.433 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1752177 /var/tmp/spdk.sock 00:13:14.433 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1752177 ']' 00:13:14.433 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.433 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:14.433 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
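The "Failed to claim CPU core: 2" error above comes from SPDK's per-core lock files (`/var/tmp/spdk_cpu_lock_NNN`, as the `check_remaining_locks` comparison earlier in this log shows): masks 0x7 and 0x1c overlap on core 2, so the second target's claim fails. A minimal, runnable sketch of that glob-and-compare pattern — not the SPDK script itself, and using a scratch directory instead of `/var/tmp` so it is safe to run anywhere:

```shell
#!/usr/bin/env bash
# Sketch of the check_remaining_locks pattern from the log above:
# glob the live lock files and compare against the expected set.
set -euo pipefail

dir=$(mktemp -d)
trap 'rm -rf "$dir"' EXIT

# Simulate three reactors holding their per-core lock files (cores 0-2, mask 0x7).
touch "$dir"/spdk_cpu_lock_{000..002}

locks=("$dir"/spdk_cpu_lock_*)                    # what actually exists on disk
locks_expected=("$dir"/spdk_cpu_lock_{000..002})  # what a 0x7 mask should leave

if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
    echo "locks match"    # prints "locks match"
else
    echo "unexpected locks: ${locks[*]}" >&2
fi
```

The glob and the brace expansion both produce lexically ordered lists, which is why a single string comparison of the joined arrays suffices, exactly as in the `[[ ... == \/\v\a\r\/\t\m\p\/... ]]` check logged above.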
00:13:14.433 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:14.433 02:19:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.997 02:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:14.997 02:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:13:14.997 02:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1752273 /var/tmp/spdk2.sock 00:13:14.997 02:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1752273 ']' 00:13:14.997 02:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:14.997 02:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:14.997 02:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:14.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:13:14.997 02:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:14.997 02:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.254 02:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:15.254 02:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:13:15.254 02:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:13:15.254 02:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:13:15.254 02:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:13:15.254 02:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:13:15.254 00:13:15.254 real 0m2.151s 00:13:15.254 user 0m1.253s 00:13:15.254 sys 0m0.195s 00:13:15.254 02:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:15.254 02:19:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.254 ************************************ 00:13:15.254 END TEST locking_overlapped_coremask_via_rpc 00:13:15.254 ************************************ 00:13:15.254 02:19:05 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:13:15.254 02:19:05 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:13:15.254 02:19:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 
1752177 ]] 00:13:15.254 02:19:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1752177 00:13:15.254 02:19:05 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1752177 ']' 00:13:15.254 02:19:05 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1752177 00:13:15.254 02:19:05 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:13:15.254 02:19:05 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:15.254 02:19:05 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1752177 00:13:15.254 02:19:05 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:15.254 02:19:05 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:15.254 02:19:05 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1752177' 00:13:15.254 killing process with pid 1752177 00:13:15.254 02:19:05 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1752177 00:13:15.254 02:19:05 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1752177 00:13:15.511 02:19:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1752273 ]] 00:13:15.511 02:19:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1752273 00:13:15.511 02:19:05 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1752273 ']' 00:13:15.511 02:19:05 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1752273 00:13:15.511 02:19:05 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:13:15.511 02:19:05 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:15.511 02:19:05 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1752273 00:13:15.511 02:19:05 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:15.511 02:19:05 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:15.511 02:19:05 
event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1752273' 00:13:15.511 killing process with pid 1752273 00:13:15.511 02:19:05 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1752273 00:13:15.511 02:19:05 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1752273 00:13:15.769 02:19:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:13:15.769 02:19:06 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:13:15.769 02:19:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1752177 ]] 00:13:15.769 02:19:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1752177 00:13:15.769 02:19:06 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1752177 ']' 00:13:15.769 02:19:06 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1752177 00:13:15.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1752177) - No such process 00:13:15.769 02:19:06 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1752177 is not found' 00:13:15.769 Process with pid 1752177 is not found 00:13:15.769 02:19:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1752273 ]] 00:13:15.769 02:19:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1752273 00:13:15.769 02:19:06 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1752273 ']' 00:13:15.770 02:19:06 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1752273 00:13:15.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1752273) - No such process 00:13:15.770 02:19:06 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1752273 is not found' 00:13:15.770 Process with pid 1752273 is not found 00:13:15.770 02:19:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:13:15.770 00:13:15.770 real 0m14.670s 00:13:15.770 user 0m27.209s 00:13:15.770 sys 0m5.266s 00:13:15.770 02:19:06 
event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:15.770 02:19:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:15.770 ************************************ 00:13:15.770 END TEST cpu_locks 00:13:15.770 ************************************ 00:13:15.770 02:19:06 event -- common/autotest_common.sh@1142 -- # return 0 00:13:15.770 00:13:15.770 real 0m40.412s 00:13:15.770 user 1m19.928s 00:13:15.770 sys 0m9.548s 00:13:15.770 02:19:06 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:15.770 02:19:06 event -- common/autotest_common.sh@10 -- # set +x 00:13:15.770 ************************************ 00:13:15.770 END TEST event 00:13:15.770 ************************************ 00:13:15.770 02:19:06 -- common/autotest_common.sh@1142 -- # return 0 00:13:15.770 02:19:06 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:13:15.770 02:19:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:15.770 02:19:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:15.770 02:19:06 -- common/autotest_common.sh@10 -- # set +x 00:13:15.770 ************************************ 00:13:15.770 START TEST thread 00:13:15.770 ************************************ 00:13:15.770 02:19:06 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:13:16.027 * Looking for test storage... 
00:13:16.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:13:16.027 02:19:06 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:13:16.027 02:19:06 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:13:16.027 02:19:06 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:16.027 02:19:06 thread -- common/autotest_common.sh@10 -- # set +x 00:13:16.027 ************************************ 00:13:16.027 START TEST thread_poller_perf 00:13:16.027 ************************************ 00:13:16.027 02:19:06 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:13:16.027 [2024-07-11 02:19:06.260469] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:16.027 [2024-07-11 02:19:06.260552] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752575 ] 00:13:16.027 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.027 [2024-07-11 02:19:06.321113] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.027 [2024-07-11 02:19:06.410175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.027 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:13:17.397 ====================================== 00:13:17.397 busy:2709836988 (cyc) 00:13:17.397 total_run_count: 261000 00:13:17.397 tsc_hz: 2700000000 (cyc) 00:13:17.397 ====================================== 00:13:17.397 poller_cost: 10382 (cyc), 3845 (nsec) 00:13:17.397 00:13:17.397 real 0m1.232s 00:13:17.397 user 0m1.150s 00:13:17.397 sys 0m0.071s 00:13:17.397 02:19:07 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:17.397 02:19:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:13:17.397 ************************************ 00:13:17.397 END TEST thread_poller_perf 00:13:17.397 ************************************ 00:13:17.397 02:19:07 thread -- common/autotest_common.sh@1142 -- # return 0 00:13:17.397 02:19:07 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:13:17.397 02:19:07 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:13:17.397 02:19:07 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:17.397 02:19:07 thread -- common/autotest_common.sh@10 -- # set +x 00:13:17.397 ************************************ 00:13:17.397 START TEST thread_poller_perf 00:13:17.397 ************************************ 00:13:17.397 02:19:07 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:13:17.397 [2024-07-11 02:19:07.544308] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:13:17.397 [2024-07-11 02:19:07.544384] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752703 ] 00:13:17.397 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.397 [2024-07-11 02:19:07.603387] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.397 [2024-07-11 02:19:07.693883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.397 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:13:18.771 ====================================== 00:13:18.771 busy:2702792676 (cyc) 00:13:18.771 total_run_count: 3672000 00:13:18.771 tsc_hz: 2700000000 (cyc) 00:13:18.771 ====================================== 00:13:18.771 poller_cost: 736 (cyc), 272 (nsec) 00:13:18.771 00:13:18.771 real 0m1.228s 00:13:18.771 user 0m1.155s 00:13:18.771 sys 0m0.066s 00:13:18.771 02:19:08 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:18.771 02:19:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:13:18.771 ************************************ 00:13:18.771 END TEST thread_poller_perf 00:13:18.771 ************************************ 00:13:18.771 02:19:08 thread -- common/autotest_common.sh@1142 -- # return 0 00:13:18.771 02:19:08 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:13:18.771 00:13:18.771 real 0m2.621s 00:13:18.771 user 0m2.367s 00:13:18.771 sys 0m0.246s 00:13:18.771 02:19:08 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:18.771 02:19:08 thread -- common/autotest_common.sh@10 -- # set +x 00:13:18.771 ************************************ 00:13:18.771 END TEST thread 00:13:18.771 ************************************ 00:13:18.771 02:19:08 -- common/autotest_common.sh@1142 -- # return 0 00:13:18.771 02:19:08 -- spdk/autotest.sh@183 -- # run_test 
accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:13:18.771 02:19:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:18.771 02:19:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:18.771 02:19:08 -- common/autotest_common.sh@10 -- # set +x 00:13:18.771 ************************************ 00:13:18.771 START TEST accel 00:13:18.771 ************************************ 00:13:18.771 02:19:08 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:13:18.771 * Looking for test storage... 00:13:18.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:13:18.771 02:19:08 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:13:18.771 02:19:08 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:13:18.771 02:19:08 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:18.772 02:19:08 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1752868 00:13:18.772 02:19:08 accel -- accel/accel.sh@63 -- # waitforlisten 1752868 00:13:18.772 02:19:08 accel -- common/autotest_common.sh@829 -- # '[' -z 1752868 ']' 00:13:18.772 02:19:08 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.772 02:19:08 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:13:18.772 02:19:08 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:18.772 02:19:08 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:18.772 02:19:08 accel -- accel/accel.sh@61 -- # build_accel_config 00:13:18.772 02:19:08 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:18.772 02:19:08 accel -- common/autotest_common.sh@10 -- # set +x 00:13:18.772 02:19:08 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:18.772 02:19:08 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:18.772 02:19:08 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:18.772 02:19:08 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:18.772 02:19:08 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:18.772 02:19:08 accel -- accel/accel.sh@40 -- # local IFS=, 00:13:18.772 02:19:08 accel -- accel/accel.sh@41 -- # jq -r . 00:13:18.772 [2024-07-11 02:19:08.951840] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:18.772 [2024-07-11 02:19:08.951934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752868 ] 00:13:18.772 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.772 [2024-07-11 02:19:09.011980] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.772 [2024-07-11 02:19:09.103262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.031 02:19:09 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:19.031 02:19:09 accel -- common/autotest_common.sh@862 -- # return 0 00:13:19.031 02:19:09 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:13:19.031 02:19:09 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:13:19.031 02:19:09 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:13:19.031 02:19:09 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:13:19.031 02:19:09 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:13:19.031 02:19:09 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:13:19.031 02:19:09 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:13:19.031 02:19:09 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.031 02:19:09 accel -- common/autotest_common.sh@10 -- # set +x 00:13:19.031 02:19:09 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.031 02:19:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # IFS== 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:19.031 02:19:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:19.031 02:19:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # IFS== 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:19.031 02:19:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:19.031 02:19:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # IFS== 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:19.031 02:19:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:19.031 02:19:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # IFS== 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:19.031 02:19:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:19.031 02:19:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # IFS== 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:19.031 02:19:09 accel -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:13:19.031 02:19:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # IFS== 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:19.031 02:19:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:19.031 02:19:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # IFS== 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:19.031 02:19:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:19.031 02:19:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # IFS== 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:19.031 02:19:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:19.031 02:19:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # IFS== 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:19.031 02:19:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:19.031 02:19:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # IFS== 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:19.031 02:19:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:19.031 02:19:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # IFS== 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:19.031 02:19:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:19.031 02:19:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # 
IFS== 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:19.031 02:19:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:19.031 02:19:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # IFS== 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:19.031 02:19:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:19.031 02:19:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # IFS== 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:19.031 02:19:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:19.031 02:19:09 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # IFS== 00:13:19.031 02:19:09 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:19.031 02:19:09 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:19.031 02:19:09 accel -- accel/accel.sh@75 -- # killprocess 1752868 00:13:19.031 02:19:09 accel -- common/autotest_common.sh@948 -- # '[' -z 1752868 ']' 00:13:19.031 02:19:09 accel -- common/autotest_common.sh@952 -- # kill -0 1752868 00:13:19.031 02:19:09 accel -- common/autotest_common.sh@953 -- # uname 00:13:19.031 02:19:09 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:19.031 02:19:09 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1752868 00:13:19.031 02:19:09 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:19.031 02:19:09 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:19.031 02:19:09 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1752868' 00:13:19.031 killing process with pid 1752868 00:13:19.031 02:19:09 accel -- common/autotest_common.sh@967 -- # kill 1752868 00:13:19.031 
02:19:09 accel -- common/autotest_common.sh@972 -- # wait 1752868 00:13:19.291 02:19:09 accel -- accel/accel.sh@76 -- # trap - ERR 00:13:19.291 02:19:09 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:13:19.291 02:19:09 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:19.291 02:19:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.291 02:19:09 accel -- common/autotest_common.sh@10 -- # set +x 00:13:19.291 02:19:09 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:13:19.291 02:19:09 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:13:19.291 02:19:09 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:13:19.291 02:19:09 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:19.291 02:19:09 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:19.291 02:19:09 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:19.291 02:19:09 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:19.291 02:19:09 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:19.291 02:19:09 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:13:19.291 02:19:09 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:13:19.291 02:19:09 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:19.291 02:19:09 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:13:19.550 02:19:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:19.550 02:19:09 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:13:19.550 02:19:09 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:13:19.550 02:19:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.550 02:19:09 accel -- common/autotest_common.sh@10 -- # set +x 00:13:19.550 ************************************ 00:13:19.550 START TEST accel_missing_filename 00:13:19.550 ************************************ 00:13:19.550 02:19:09 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:13:19.550 02:19:09 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:13:19.550 02:19:09 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:13:19.550 02:19:09 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:13:19.550 02:19:09 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.550 02:19:09 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:13:19.550 02:19:09 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.550 02:19:09 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:13:19.550 02:19:09 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:13:19.550 02:19:09 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:13:19.550 02:19:09 
accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:19.550 02:19:09 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:19.550 02:19:09 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:19.550 02:19:09 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:19.550 02:19:09 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:19.550 02:19:09 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:13:19.550 02:19:09 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:13:19.550 [2024-07-11 02:19:09.775209] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:19.550 [2024-07-11 02:19:09.775284] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753004 ] 00:13:19.550 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.550 [2024-07-11 02:19:09.832939] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.550 [2024-07-11 02:19:09.923814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.809 [2024-07-11 02:19:09.975610] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:19.809 [2024-07-11 02:19:10.025821] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:13:19.809 A filename is required. 
00:13:19.809 02:19:10 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:13:19.809 02:19:10 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:19.809 02:19:10 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:13:19.809 02:19:10 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:13:19.809 02:19:10 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:13:19.809 02:19:10 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:19.809 00:13:19.809 real 0m0.332s 00:13:19.809 user 0m0.239s 00:13:19.809 sys 0m0.129s 00:13:19.809 02:19:10 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:19.809 02:19:10 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:13:19.809 ************************************ 00:13:19.809 END TEST accel_missing_filename 00:13:19.809 ************************************ 00:13:19.809 02:19:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:19.809 02:19:10 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:13:19.809 02:19:10 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:13:19.809 02:19:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.809 02:19:10 accel -- common/autotest_common.sh@10 -- # set +x 00:13:19.809 ************************************ 00:13:19.809 START TEST accel_compress_verify 00:13:19.809 ************************************ 00:13:19.809 02:19:10 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:13:19.809 02:19:10 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:13:19.809 02:19:10 
accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:13:19.809 02:19:10 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:13:19.809 02:19:10 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.809 02:19:10 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:13:19.809 02:19:10 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.809 02:19:10 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:13:19.809 02:19:10 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:13:19.809 02:19:10 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:13:19.809 02:19:10 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:19.809 02:19:10 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:19.809 02:19:10 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:19.809 02:19:10 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:19.809 02:19:10 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:19.809 02:19:10 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:13:19.809 02:19:10 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:13:19.809 [2024-07-11 02:19:10.155426] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:13:19.809 [2024-07-11 02:19:10.155507] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753110 ] 00:13:19.809 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.809 [2024-07-11 02:19:10.214066] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.068 [2024-07-11 02:19:10.304829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.068 [2024-07-11 02:19:10.356486] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:20.068 [2024-07-11 02:19:10.405838] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:13:20.068 00:13:20.068 Compression does not support the verify option, aborting. 00:13:20.068 02:19:10 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:13:20.068 02:19:10 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:20.068 02:19:10 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:13:20.068 02:19:10 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:13:20.068 02:19:10 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:13:20.068 02:19:10 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:20.068 00:13:20.068 real 0m0.333s 00:13:20.068 user 0m0.235s 00:13:20.068 sys 0m0.134s 00:13:20.068 02:19:10 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:20.068 02:19:10 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:13:20.068 ************************************ 00:13:20.068 END TEST accel_compress_verify 00:13:20.068 ************************************ 00:13:20.327 02:19:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:20.328 02:19:10 accel -- 
accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:13:20.328 02:19:10 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:13:20.328 02:19:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:20.328 02:19:10 accel -- common/autotest_common.sh@10 -- # set +x 00:13:20.328 ************************************ 00:13:20.328 START TEST accel_wrong_workload 00:13:20.328 ************************************ 00:13:20.328 02:19:10 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:13:20.328 02:19:10 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:13:20.328 02:19:10 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:13:20.328 02:19:10 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:13:20.328 02:19:10 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:20.328 02:19:10 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:13:20.328 02:19:10 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:20.328 02:19:10 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:13:20.328 02:19:10 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:13:20.328 02:19:10 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:13:20.328 02:19:10 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:20.328 02:19:10 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:20.328 02:19:10 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:20.328 02:19:10 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:13:20.328 02:19:10 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:20.328 02:19:10 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:13:20.328 02:19:10 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:13:20.328 Unsupported workload type: foobar 00:13:20.328 [2024-07-11 02:19:10.540264] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:13:20.328 accel_perf options: 00:13:20.328 [-h help message] 00:13:20.328 [-q queue depth per core] 00:13:20.328 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:13:20.328 [-T number of threads per core 00:13:20.328 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:13:20.328 [-t time in seconds] 00:13:20.328 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:13:20.328 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:13:20.328 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:13:20.328 [-l for compress/decompress workloads, name of uncompressed input file 00:13:20.328 [-S for crc32c workload, use this seed value (default 0) 00:13:20.328 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:13:20.328 [-f for fill workload, use this BYTE value (default 255) 00:13:20.328 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:13:20.328 [-y verify result if this switch is on] 00:13:20.328 [-a tasks to allocate per core (default: same value as -q)] 00:13:20.328 Can be used to spread operations across a wider range of memory. 
00:13:20.328 02:19:10 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:13:20.328 02:19:10 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:20.328 02:19:10 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:20.328 02:19:10 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:20.328 00:13:20.328 real 0m0.023s 00:13:20.328 user 0m0.013s 00:13:20.328 sys 0m0.010s 00:13:20.328 02:19:10 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:20.328 02:19:10 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:13:20.328 ************************************ 00:13:20.328 END TEST accel_wrong_workload 00:13:20.328 ************************************ 00:13:20.328 Error: writing output failed: Broken pipe 00:13:20.328 02:19:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:20.328 02:19:10 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:13:20.328 02:19:10 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:13:20.328 02:19:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:20.328 02:19:10 accel -- common/autotest_common.sh@10 -- # set +x 00:13:20.328 ************************************ 00:13:20.328 START TEST accel_negative_buffers 00:13:20.328 ************************************ 00:13:20.328 02:19:10 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:13:20.328 02:19:10 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:13:20.328 02:19:10 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:13:20.328 02:19:10 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:13:20.328 02:19:10 accel.accel_negative_buffers -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:20.328 02:19:10 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:13:20.328 02:19:10 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:20.328 02:19:10 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:13:20.328 02:19:10 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:13:20.328 02:19:10 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:13:20.328 02:19:10 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:20.328 02:19:10 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:20.328 02:19:10 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:20.328 02:19:10 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:20.328 02:19:10 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:20.328 02:19:10 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:13:20.328 02:19:10 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:13:20.328 -x option must be non-negative. 00:13:20.328 [2024-07-11 02:19:10.615631] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:13:20.328 accel_perf options: 00:13:20.328 [-h help message] 00:13:20.328 [-q queue depth per core] 00:13:20.328 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:13:20.328 [-T number of threads per core 00:13:20.328 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:13:20.328 [-t time in seconds] 00:13:20.328 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:13:20.328 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:13:20.328 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:13:20.328 [-l for compress/decompress workloads, name of uncompressed input file 00:13:20.328 [-S for crc32c workload, use this seed value (default 0) 00:13:20.328 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:13:20.328 [-f for fill workload, use this BYTE value (default 255) 00:13:20.328 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:13:20.328 [-y verify result if this switch is on] 00:13:20.328 [-a tasks to allocate per core (default: same value as -q)] 00:13:20.328 Can be used to spread operations across a wider range of memory. 
00:13:20.328 02:19:10 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:13:20.328 02:19:10 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:20.328 02:19:10 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:20.328 02:19:10 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:20.328 00:13:20.328 real 0m0.025s 00:13:20.328 user 0m0.010s 00:13:20.328 sys 0m0.015s 00:13:20.328 02:19:10 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:20.328 02:19:10 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:13:20.328 ************************************ 00:13:20.328 END TEST accel_negative_buffers 00:13:20.328 ************************************ 00:13:20.328 Error: writing output failed: Broken pipe 00:13:20.328 02:19:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:20.328 02:19:10 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:13:20.328 02:19:10 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:13:20.328 02:19:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:20.328 02:19:10 accel -- common/autotest_common.sh@10 -- # set +x 00:13:20.328 ************************************ 00:13:20.328 START TEST accel_crc32c 00:13:20.328 ************************************ 00:13:20.328 02:19:10 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:13:20.328 02:19:10 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:13:20.328 02:19:10 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:13:20.328 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.329 02:19:10 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:13:20.329 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 
00:13:20.329 02:19:10 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:13:20.329 02:19:10 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:13:20.329 02:19:10 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:20.329 02:19:10 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:20.329 02:19:10 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:20.329 02:19:10 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:20.329 02:19:10 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:20.329 02:19:10 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:13:20.329 02:19:10 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:13:20.329 [2024-07-11 02:19:10.684072] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:20.329 [2024-07-11 02:19:10.684142] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753175 ] 00:13:20.329 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.329 [2024-07-11 02:19:10.742898] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.588 [2024-07-11 02:19:10.832621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.588 
02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.588 02:19:10 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # 
IFS=: 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.588 02:19:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:22.004 02:19:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:22.004 02:19:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:22.004 02:19:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:22.004 02:19:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:22.004 02:19:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:22.004 02:19:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:22.004 02:19:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:22.004 02:19:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:22.004 02:19:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:22.004 02:19:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:22.004 02:19:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:22.004 02:19:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:22.004 02:19:11 accel.accel_crc32c -- 
accel/accel.sh@20 -- # val= 00:13:22.004 02:19:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:22.004 02:19:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:22.004 02:19:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:22.004 02:19:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:22.004 02:19:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:22.004 02:19:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:22.004 02:19:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:22.004 02:19:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:22.004 02:19:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:22.004 02:19:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:22.004 02:19:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:22.004 02:19:11 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:22.004 02:19:11 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:13:22.004 02:19:11 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:22.004 00:13:22.004 real 0m1.331s 00:13:22.004 user 0m1.211s 00:13:22.004 sys 0m0.122s 00:13:22.004 02:19:11 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:22.004 02:19:11 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:13:22.004 ************************************ 00:13:22.004 END TEST accel_crc32c 00:13:22.004 ************************************ 00:13:22.004 02:19:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:22.004 02:19:12 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:13:22.004 02:19:12 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:13:22.004 02:19:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:22.004 02:19:12 accel -- common/autotest_common.sh@10 -- # set +x 
00:13:22.004 ************************************ 00:13:22.004 START TEST accel_crc32c_C2 00:13:22.004 ************************************ 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:13:22.004 [2024-07-11 02:19:12.064897] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:13:22.004 [2024-07-11 02:19:12.064968] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753306 ] 00:13:22.004 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.004 [2024-07-11 02:19:12.122902] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.004 [2024-07-11 02:19:12.213816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:22.004 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
read -r var val 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:22.005 02:19:12 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:22.005 02:19:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:23.379 
02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:23.379 00:13:23.379 real 0m1.335s 00:13:23.379 user 0m1.213s 00:13:23.379 sys 0m0.123s 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:23.379 02:19:13 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:13:23.379 ************************************ 00:13:23.379 END TEST accel_crc32c_C2 00:13:23.379 ************************************ 00:13:23.379 02:19:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:23.379 02:19:13 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:13:23.379 02:19:13 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:13:23.379 02:19:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:23.379 02:19:13 accel -- common/autotest_common.sh@10 -- # set +x 00:13:23.379 ************************************ 00:13:23.379 START TEST accel_copy 00:13:23.379 ************************************ 00:13:23.379 02:19:13 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:13:23.379 02:19:13 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:13:23.379 02:19:13 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:13:23.379 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:23.379 02:19:13 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:13:23.379 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:13:23.379 02:19:13 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:13:23.379 02:19:13 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:13:23.379 02:19:13 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:23.379 02:19:13 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:23.379 02:19:13 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:23.379 02:19:13 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:23.379 02:19:13 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:13:23.380 [2024-07-11 02:19:13.451809] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:23.380 [2024-07-11 02:19:13.451880] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753476 ] 00:13:23.380 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.380 [2024-07-11 02:19:13.511146] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.380 [2024-07-11 02:19:13.602082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:23.380 02:19:13 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:23.380 02:19:13 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:23.380 02:19:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:24.755 02:19:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:24.756 02:19:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:24.756 02:19:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:24.756 02:19:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:24.756 02:19:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:24.756 02:19:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:24.756 02:19:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:24.756 02:19:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:24.756 02:19:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:24.756 02:19:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:24.756 02:19:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:24.756 02:19:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:24.756 02:19:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:24.756 02:19:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:24.756 02:19:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:24.756 02:19:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:24.756 02:19:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:24.756 02:19:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:24.756 02:19:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 
00:13:24.756 02:19:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:24.756 02:19:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:24.756 02:19:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:24.756 02:19:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:24.756 02:19:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:24.756 02:19:14 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:24.756 02:19:14 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:13:24.756 02:19:14 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:24.756 00:13:24.756 real 0m1.337s 00:13:24.756 user 0m1.209s 00:13:24.756 sys 0m0.129s 00:13:24.756 02:19:14 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:24.756 02:19:14 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:13:24.756 ************************************ 00:13:24.756 END TEST accel_copy 00:13:24.756 ************************************ 00:13:24.756 02:19:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:24.756 02:19:14 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:24.756 02:19:14 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:13:24.756 02:19:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:24.756 02:19:14 accel -- common/autotest_common.sh@10 -- # set +x 00:13:24.756 ************************************ 00:13:24.756 START TEST accel_fill 00:13:24.756 ************************************ 00:13:24.756 02:19:14 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:24.756 02:19:14 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:13:24.756 02:19:14 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:13:24.756 02:19:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:24.756 02:19:14 
accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:24.756 02:19:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:24.756 02:19:14 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:24.756 02:19:14 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:13:24.756 02:19:14 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:24.756 02:19:14 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:24.756 02:19:14 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:24.756 02:19:14 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:24.756 02:19:14 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:24.756 02:19:14 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:13:24.756 02:19:14 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:13:24.756 [2024-07-11 02:19:14.838358] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:13:24.756 [2024-07-11 02:19:14.838430] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753635 ] 00:13:24.756 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.756 [2024-07-11 02:19:14.896192] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.756 [2024-07-11 02:19:14.987071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:24.756 02:19:15 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:24.756 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:24.757 02:19:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@20 
-- # val= 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:13:26.132 02:19:16 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:13:26.132 00:13:26.132 real 0m1.336s 00:13:26.132 user 0m1.205s 00:13:26.132 sys 0m0.132s 00:13:26.132 02:19:16 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:26.132 02:19:16 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:13:26.132 ************************************ 00:13:26.132 END TEST accel_fill 00:13:26.132 ************************************ 00:13:26.132 02:19:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:26.132 02:19:16 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:13:26.132 02:19:16 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:13:26.132 02:19:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:26.132 02:19:16 accel -- common/autotest_common.sh@10 -- # set +x 00:13:26.132 ************************************ 00:13:26.132 START TEST accel_copy_crc32c 00:13:26.132 ************************************ 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@32 -- 
# [[ 0 -gt 0 ]] 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:13:26.132 [2024-07-11 02:19:16.226990] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:26.132 [2024-07-11 02:19:16.227070] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753756 ] 00:13:26.132 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.132 [2024-07-11 02:19:16.285763] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.132 [2024-07-11 02:19:16.373633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:26.132 02:19:16 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 
bytes' 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- 
# read -r var val 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:26.132 02:19:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:27.507 00:13:27.507 real 0m1.333s 00:13:27.507 user 0m1.210s 00:13:27.507 sys 0m0.126s 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:27.507 02:19:17 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:13:27.507 ************************************ 00:13:27.507 END TEST accel_copy_crc32c 
00:13:27.507 ************************************ 00:13:27.507 02:19:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:27.507 02:19:17 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:13:27.507 02:19:17 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:13:27.507 02:19:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:27.507 02:19:17 accel -- common/autotest_common.sh@10 -- # set +x 00:13:27.507 ************************************ 00:13:27.507 START TEST accel_copy_crc32c_C2 00:13:27.507 ************************************ 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:27.507 02:19:17 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:13:27.507 [2024-07-11 02:19:17.614747] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:27.507 [2024-07-11 02:19:17.614816] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753880 ] 00:13:27.507 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.507 [2024-07-11 02:19:17.674684] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.507 [2024-07-11 02:19:17.764199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:27.507 02:19:17 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:27.507 
02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:27.507 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:27.508 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:27.508 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:27.508 02:19:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:28.881 02:19:18 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:28.881 00:13:28.881 real 0m1.333s 00:13:28.881 user 0m1.214s 00:13:28.881 sys 0m0.122s 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:28.881 02:19:18 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:13:28.881 ************************************ 00:13:28.881 
END TEST accel_copy_crc32c_C2 00:13:28.881 ************************************ 00:13:28.881 02:19:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:28.881 02:19:18 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:13:28.881 02:19:18 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:13:28.881 02:19:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.881 02:19:18 accel -- common/autotest_common.sh@10 -- # set +x 00:13:28.881 ************************************ 00:13:28.881 START TEST accel_dualcast 00:13:28.881 ************************************ 00:13:28.881 02:19:18 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:13:28.881 02:19:18 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:13:28.881 02:19:18 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:13:28.881 02:19:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:28.881 02:19:18 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:13:28.881 02:19:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:28.881 02:19:18 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:13:28.881 02:19:18 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:13:28.881 02:19:18 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:28.881 02:19:18 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:28.881 02:19:18 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:28.881 02:19:18 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:28.881 02:19:18 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:28.881 02:19:18 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:13:28.881 02:19:18 accel.accel_dualcast -- 
accel/accel.sh@41 -- # jq -r . 00:13:28.881 [2024-07-11 02:19:19.000184] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:28.881 [2024-07-11 02:19:19.000269] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754061 ] 00:13:28.881 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.881 [2024-07-11 02:19:19.057828] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.881 [2024-07-11 02:19:19.149050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.881 02:19:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:28.881 02:19:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:28.881 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:28.881 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:28.881 02:19:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:28.881 02:19:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:28.881 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:28.881 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:28.881 02:19:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:13:28.881 02:19:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:28.881 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:28.881 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:28.881 02:19:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:28.881 02:19:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:28.881 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:28.881 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 
00:13:28.881 02:19:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:28.881 02:19:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:28.881 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:28.882 02:19:19 accel.accel_dualcast 
-- accel/accel.sh@19 -- # IFS=: 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:28.882 02:19:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:28.882 02:19:19 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:13:30.255 02:19:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:30.255 02:19:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:30.255 02:19:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:30.255 02:19:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:30.255 02:19:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:30.255 02:19:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:30.255 02:19:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:30.255 02:19:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:30.255 02:19:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:30.255 02:19:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:30.255 02:19:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:30.255 02:19:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:30.255 02:19:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:30.255 02:19:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:30.255 02:19:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:30.255 02:19:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:30.255 02:19:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:30.255 02:19:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:30.255 02:19:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:30.255 02:19:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:30.255 02:19:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:30.255 02:19:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:30.255 02:19:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:30.255 02:19:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:30.255 02:19:20 accel.accel_dualcast -- 
accel/accel.sh@27 -- # [[ -n software ]] 00:13:30.255 02:19:20 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:13:30.256 02:19:20 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:30.256 00:13:30.256 real 0m1.336s 00:13:30.256 user 0m1.209s 00:13:30.256 sys 0m0.129s 00:13:30.256 02:19:20 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:30.256 02:19:20 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:13:30.256 ************************************ 00:13:30.256 END TEST accel_dualcast 00:13:30.256 ************************************ 00:13:30.256 02:19:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:30.256 02:19:20 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:13:30.256 02:19:20 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:13:30.256 02:19:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:30.256 02:19:20 accel -- common/autotest_common.sh@10 -- # set +x 00:13:30.256 ************************************ 00:13:30.256 START TEST accel_compare 00:13:30.256 ************************************ 00:13:30.256 02:19:20 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@12 -- # 
build_accel_config 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:13:30.256 [2024-07-11 02:19:20.391139] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:30.256 [2024-07-11 02:19:20.391215] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754214 ] 00:13:30.256 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.256 [2024-07-11 02:19:20.450665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.256 [2024-07-11 02:19:20.540946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:13:30.256 
02:19:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:13:30.256 
02:19:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:30.256 
02:19:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:30.256 02:19:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:31.653 02:19:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:31.653 02:19:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:31.653 02:19:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:31.653 02:19:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:31.653 02:19:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:31.653 02:19:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:31.653 02:19:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:31.653 02:19:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:31.653 02:19:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:31.653 02:19:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:31.653 02:19:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:31.653 02:19:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:31.653 02:19:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:31.653 02:19:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:31.653 02:19:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:31.653 02:19:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:31.653 02:19:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:31.653 02:19:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:31.653 02:19:21 accel.accel_compare -- 
accel/accel.sh@19 -- # IFS=: 00:13:31.653 02:19:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:31.653 02:19:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:31.653 02:19:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:31.653 02:19:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:31.653 02:19:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:31.653 02:19:21 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:31.653 02:19:21 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:13:31.653 02:19:21 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:31.653 00:13:31.653 real 0m1.338s 00:13:31.653 user 0m1.212s 00:13:31.653 sys 0m0.128s 00:13:31.653 02:19:21 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:31.653 02:19:21 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:13:31.653 ************************************ 00:13:31.653 END TEST accel_compare 00:13:31.653 ************************************ 00:13:31.653 02:19:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:31.653 02:19:21 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:13:31.653 02:19:21 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:13:31.653 02:19:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:31.653 02:19:21 accel -- common/autotest_common.sh@10 -- # set +x 00:13:31.653 ************************************ 00:13:31.653 START TEST accel_xor 00:13:31.653 ************************************ 00:13:31.653 02:19:21 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:13:31.653 02:19:21 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:13:31.653 02:19:21 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:13:31.653 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:13:31.653 02:19:21 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:13:31.653 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:13:31.654 [2024-07-11 02:19:21.780103] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:13:31.654 [2024-07-11 02:19:21.780171] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754339 ] 00:13:31.654 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.654 [2024-07-11 02:19:21.839025] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.654 [2024-07-11 02:19:21.929611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:31.654 
02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:31.654 02:19:21 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:31.654 02:19:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:33.027 02:19:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@21 -- # 
case "$var" in 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:33.028 00:13:33.028 real 0m1.333s 00:13:33.028 user 0m1.209s 00:13:33.028 sys 
0m0.127s 00:13:33.028 02:19:23 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:33.028 02:19:23 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:13:33.028 ************************************ 00:13:33.028 END TEST accel_xor 00:13:33.028 ************************************ 00:13:33.028 02:19:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:33.028 02:19:23 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:13:33.028 02:19:23 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:13:33.028 02:19:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:33.028 02:19:23 accel -- common/autotest_common.sh@10 -- # set +x 00:13:33.028 ************************************ 00:13:33.028 START TEST accel_xor 00:13:33.028 ************************************ 00:13:33.028 02:19:23 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:33.028 02:19:23 accel.accel_xor -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:13:33.028 [2024-07-11 02:19:23.164284] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:33.028 [2024-07-11 02:19:23.164355] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754465 ] 00:13:33.028 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.028 [2024-07-11 02:19:23.224741] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.028 [2024-07-11 02:19:23.315578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:33.028 02:19:23 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:33.028 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:33.029 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:33.029 02:19:23 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:13:33.029 02:19:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:33.029 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:33.029 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:33.029 02:19:23 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:13:33.029 02:19:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:33.029 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:33.029 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:33.029 02:19:23 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:13:33.029 02:19:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:33.029 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:33.029 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:33.029 02:19:23 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:13:33.029 02:19:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:33.029 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:33.029 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:33.029 02:19:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:33.029 02:19:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:33.029 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:33.029 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:33.029 02:19:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:33.029 02:19:23 accel.accel_xor -- accel/accel.sh@21 -- # 
case "$var" in 00:13:33.029 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:33.029 02:19:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:34.402 02:19:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:34.402 02:19:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:34.402 02:19:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:34.402 02:19:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:34.402 02:19:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:34.402 02:19:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:34.402 02:19:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:34.402 02:19:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:34.402 02:19:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:34.402 02:19:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:34.402 02:19:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:34.402 02:19:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:34.402 02:19:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:34.402 02:19:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:34.402 02:19:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:34.402 02:19:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:34.402 02:19:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:34.402 02:19:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:34.402 02:19:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:34.402 02:19:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:34.402 02:19:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:34.402 02:19:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:34.402 02:19:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:34.402 02:19:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:34.402 02:19:24 accel.accel_xor -- 
accel/accel.sh@27 -- # [[ -n software ]] 00:13:34.402 02:19:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:13:34.402 02:19:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:34.402 00:13:34.402 real 0m1.335s 00:13:34.402 user 0m1.207s 00:13:34.402 sys 0m0.131s 00:13:34.402 02:19:24 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:34.402 02:19:24 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:13:34.402 ************************************ 00:13:34.402 END TEST accel_xor 00:13:34.402 ************************************ 00:13:34.402 02:19:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:34.402 02:19:24 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:13:34.402 02:19:24 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:13:34.402 02:19:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:34.402 02:19:24 accel -- common/autotest_common.sh@10 -- # set +x 00:13:34.402 ************************************ 00:13:34.402 START TEST accel_dif_verify 00:13:34.402 ************************************ 00:13:34.402 02:19:24 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:13:34.402 02:19:24 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:13:34.402 02:19:24 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:13:34.402 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:34.402 02:19:24 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:13:34.402 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:34.402 02:19:24 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:13:34.402 02:19:24 accel.accel_dif_verify -- accel/accel.sh@12 -- # 
build_accel_config 00:13:34.402 02:19:24 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:34.402 02:19:24 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:34.402 02:19:24 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:34.402 02:19:24 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:34.402 02:19:24 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:34.402 02:19:24 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:13:34.402 02:19:24 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:13:34.402 [2024-07-11 02:19:24.552245] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:34.402 [2024-07-11 02:19:24.552320] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754652 ] 00:13:34.402 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.402 [2024-07-11 02:19:24.611531] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.402 [2024-07-11 02:19:24.702624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.402 02:19:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:34.402 02:19:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:34.402 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:34.402 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:34.402 02:19:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:34.402 02:19:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:34.402 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:34.402 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:34.403 02:19:24 accel.accel_dif_verify 
-- accel/accel.sh@20 -- # val=0x1 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:34.403 02:19:24 accel.accel_dif_verify 
-- accel/accel.sh@19 -- # read -r var val 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:34.403 02:19:24 
accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:34.403 02:19:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:35.778 02:19:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:35.778 02:19:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:35.778 02:19:25 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # IFS=: 00:13:35.778 02:19:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:35.778 02:19:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:35.778 02:19:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:35.778 02:19:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:35.778 02:19:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:35.778 02:19:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:35.778 02:19:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:35.778 02:19:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:35.778 02:19:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:35.778 02:19:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:35.778 02:19:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:35.778 02:19:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:35.778 02:19:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:35.778 02:19:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:35.778 02:19:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:35.778 02:19:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:35.778 02:19:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:35.778 02:19:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:35.778 02:19:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:35.778 02:19:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:35.779 02:19:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:35.779 02:19:25 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:35.779 02:19:25 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:13:35.779 02:19:25 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ 
software == \s\o\f\t\w\a\r\e ]] 00:13:35.779 00:13:35.779 real 0m1.335s 00:13:35.779 user 0m1.207s 00:13:35.779 sys 0m0.131s 00:13:35.779 02:19:25 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:35.779 02:19:25 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:13:35.779 ************************************ 00:13:35.779 END TEST accel_dif_verify 00:13:35.779 ************************************ 00:13:35.779 02:19:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:35.779 02:19:25 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:13:35.779 02:19:25 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:13:35.779 02:19:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:35.779 02:19:25 accel -- common/autotest_common.sh@10 -- # set +x 00:13:35.779 ************************************ 00:13:35.779 START TEST accel_dif_generate 00:13:35.779 ************************************ 00:13:35.779 02:19:25 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:13:35.779 02:19:25 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:13:35.779 02:19:25 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:13:35.779 02:19:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:35.779 02:19:25 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:13:35.779 02:19:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:35.779 02:19:25 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:13:35.779 02:19:25 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:13:35.779 02:19:25 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:35.779 02:19:25 
accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:35.779 02:19:25 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:35.779 02:19:25 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:35.779 02:19:25 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:35.779 02:19:25 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:13:35.779 02:19:25 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:13:35.779 [2024-07-11 02:19:25.936251] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:35.779 [2024-07-11 02:19:25.936323] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754797 ] 00:13:35.779 EAL: No free 2048 kB hugepages reported on node 1 00:13:35.779 [2024-07-11 02:19:25.995287] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.779 [2024-07-11 02:19:26.086358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@21 
-- # case "$var" in 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 
00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:35.779 02:19:26 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:35.779 02:19:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:37.153 02:19:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:37.153 02:19:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:13:37.153 02:19:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:37.153 02:19:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:37.153 02:19:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:37.153 02:19:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:37.153 02:19:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:37.153 02:19:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:37.153 02:19:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:37.153 02:19:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:37.153 02:19:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:37.153 02:19:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:37.153 02:19:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:37.154 02:19:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:37.154 02:19:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:37.154 02:19:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:37.154 02:19:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:37.154 02:19:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:37.154 02:19:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:37.154 02:19:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:37.154 02:19:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:37.154 02:19:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:37.154 02:19:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:37.154 02:19:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:37.154 02:19:27 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:37.154 02:19:27 accel.accel_dif_generate -- accel/accel.sh@27 -- 
# [[ -n dif_generate ]] 00:13:37.154 02:19:27 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:37.154 00:13:37.154 real 0m1.339s 00:13:37.154 user 0m1.215s 00:13:37.154 sys 0m0.128s 00:13:37.154 02:19:27 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:37.154 02:19:27 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:13:37.154 ************************************ 00:13:37.154 END TEST accel_dif_generate 00:13:37.154 ************************************ 00:13:37.154 02:19:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:37.154 02:19:27 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:13:37.154 02:19:27 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:13:37.154 02:19:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:37.154 02:19:27 accel -- common/autotest_common.sh@10 -- # set +x 00:13:37.154 ************************************ 00:13:37.154 START TEST accel_dif_generate_copy 00:13:37.154 ************************************ 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:13:37.154 02:19:27 
accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:13:37.154 [2024-07-11 02:19:27.331524] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:37.154 [2024-07-11 02:19:27.331598] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754919 ] 00:13:37.154 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.154 [2024-07-11 02:19:27.390585] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.154 [2024-07-11 02:19:27.481757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val=1 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:37.154 02:19:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:38.529 02:19:28 
accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:38.529 00:13:38.529 real 0m1.336s 00:13:38.529 user 0m1.199s 00:13:38.529 sys 0m0.138s 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:38.529 02:19:28 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:13:38.529 ************************************ 00:13:38.529 END TEST accel_dif_generate_copy 00:13:38.529 ************************************ 00:13:38.529 02:19:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:38.529 02:19:28 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:13:38.529 02:19:28 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:13:38.529 02:19:28 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:13:38.529 02:19:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:38.529 02:19:28 accel -- common/autotest_common.sh@10 -- # set +x 00:13:38.529 ************************************ 00:13:38.529 START TEST accel_comp 00:13:38.529 ************************************ 00:13:38.529 02:19:28 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:13:38.529 02:19:28 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:13:38.529 02:19:28 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:13:38.529 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:38.529 02:19:28 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:13:38.529 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:38.529 02:19:28 
accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:13:38.529 02:19:28 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:13:38.529 02:19:28 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:38.529 02:19:28 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:38.529 02:19:28 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:38.529 02:19:28 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:13:38.530 [2024-07-11 02:19:28.715948] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:38.530 [2024-07-11 02:19:28.716016] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1755045 ] 00:13:38.530 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.530 [2024-07-11 02:19:28.774496] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.530 [2024-07-11 02:19:28.866012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 
00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:38.530 02:19:28 
accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:38.530 02:19:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:39.906 02:19:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:39.906 02:19:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:39.906 02:19:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:39.906 02:19:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:39.906 02:19:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:39.906 02:19:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:39.906 02:19:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:39.906 02:19:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:39.906 02:19:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:39.906 02:19:30 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:13:39.906 02:19:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:39.906 02:19:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:39.906 02:19:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:39.906 02:19:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:39.906 02:19:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:39.906 02:19:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:39.906 02:19:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:39.906 02:19:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:39.906 02:19:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:39.906 02:19:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:39.906 02:19:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:39.906 02:19:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:39.906 02:19:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:39.906 02:19:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:39.906 02:19:30 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:39.906 02:19:30 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:13:39.906 02:19:30 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:39.906 00:13:39.906 real 0m1.339s 00:13:39.906 user 0m1.215s 00:13:39.906 sys 0m0.127s 00:13:39.906 02:19:30 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:39.906 02:19:30 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:13:39.906 ************************************ 00:13:39.906 END TEST accel_comp 00:13:39.906 ************************************ 00:13:39.906 02:19:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:39.906 02:19:30 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:13:39.906 02:19:30 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:13:39.906 02:19:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:39.906 02:19:30 accel -- common/autotest_common.sh@10 -- # set +x 00:13:39.906 ************************************ 00:13:39.906 START TEST accel_decomp 00:13:39.906 ************************************ 00:13:39.906 02:19:30 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@41 -- 
# jq -r . 00:13:39.906 [2024-07-11 02:19:30.105774] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:39.906 [2024-07-11 02:19:30.105848] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1755249 ] 00:13:39.906 EAL: No free 2048 kB hugepages reported on node 1 00:13:39.906 [2024-07-11 02:19:30.164797] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.906 [2024-07-11 02:19:30.255719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:39.906 02:19:30 accel.accel_decomp -- 
accel/accel.sh@20 -- # val= 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:13:39.906 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:39.906 02:19:30 accel.accel_decomp -- 
accel/accel.sh@19 -- # read -r var val 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:39.907 
02:19:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:39.907 02:19:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" 
in 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:41.282 02:19:31 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:41.282 00:13:41.282 real 0m1.341s 00:13:41.282 user 0m1.219s 00:13:41.282 sys 0m0.124s 00:13:41.282 02:19:31 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:41.282 02:19:31 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:13:41.283 ************************************ 00:13:41.283 END TEST accel_decomp 00:13:41.283 ************************************ 00:13:41.283 02:19:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:41.283 02:19:31 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:13:41.283 02:19:31 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:13:41.283 02:19:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:41.283 02:19:31 accel -- common/autotest_common.sh@10 -- # set +x 00:13:41.283 ************************************ 00:13:41.283 START TEST accel_decomp_full 00:13:41.283 ************************************ 00:13:41.283 02:19:31 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:13:41.283 
02:19:31 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:13:41.283 [2024-07-11 02:19:31.496399] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:13:41.283 [2024-07-11 02:19:31.496470] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1755374 ] 00:13:41.283 EAL: No free 2048 kB hugepages reported on node 1 00:13:41.283 [2024-07-11 02:19:31.555212] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.283 [2024-07-11 02:19:31.645993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:41.283 02:19:31 
accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:41.283 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:13:41.542 02:19:31 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:41.542 02:19:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 
00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:42.477 02:19:32 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:42.477 00:13:42.477 real 0m1.349s 00:13:42.477 user 0m1.220s 00:13:42.477 sys 0m0.131s 00:13:42.477 02:19:32 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:42.477 02:19:32 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:13:42.477 ************************************ 00:13:42.477 END TEST accel_decomp_full 00:13:42.477 ************************************ 00:13:42.477 02:19:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:42.477 02:19:32 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:13:42.477 02:19:32 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:13:42.477 02:19:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:42.477 02:19:32 accel 
-- common/autotest_common.sh@10 -- # set +x 00:13:42.477 ************************************ 00:13:42.477 START TEST accel_decomp_mcore 00:13:42.477 ************************************ 00:13:42.477 02:19:32 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:13:42.477 02:19:32 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:13:42.477 02:19:32 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:13:42.477 02:19:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:42.477 02:19:32 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:13:42.477 02:19:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:42.477 02:19:32 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:13:42.477 02:19:32 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:13:42.477 02:19:32 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:42.477 02:19:32 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:42.477 02:19:32 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:42.477 02:19:32 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:42.477 02:19:32 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:42.477 02:19:32 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:13:42.477 02:19:32 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:13:42.477 [2024-07-11 02:19:32.894794] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:13:42.478 [2024-07-11 02:19:32.894865] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1755499 ] 00:13:42.736 EAL: No free 2048 kB hugepages reported on node 1 00:13:42.736 [2024-07-11 02:19:32.953765] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:42.736 [2024-07-11 02:19:33.046619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.736 [2024-07-11 02:19:33.046717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:42.736 [2024-07-11 02:19:33.046720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.736 [2024-07-11 02:19:33.046670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:13:42.736 02:19:33 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # read -r var val 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:13:42.736 02:19:33 accel.accel_decomp_mcore 
-- accel/accel.sh@21 -- # case "$var" in 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:42.736 02:19:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:44.166 02:19:34 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:44.166 
02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:44.166 00:13:44.166 real 0m1.346s 00:13:44.166 user 0m4.539s 00:13:44.166 sys 0m0.127s 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:44.166 02:19:34 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:13:44.166 ************************************ 00:13:44.166 END TEST accel_decomp_mcore 00:13:44.166 ************************************ 00:13:44.166 02:19:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:44.166 02:19:34 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:44.166 02:19:34 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:13:44.166 02:19:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:44.166 02:19:34 accel -- common/autotest_common.sh@10 -- # set +x 00:13:44.166 ************************************ 00:13:44.166 START TEST accel_decomp_full_mcore 00:13:44.166 ************************************ 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # 
local accel_module 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:13:44.166 [2024-07-11 02:19:34.288051] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:13:44.166 [2024-07-11 02:19:34.288121] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1755635 ] 00:13:44.166 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.166 [2024-07-11 02:19:34.347290] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:44.166 [2024-07-11 02:19:34.441387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.166 [2024-07-11 02:19:34.441490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:44.166 [2024-07-11 02:19:34.441493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.166 [2024-07-11 02:19:34.441440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.166 02:19:34 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:44.166 02:19:34 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.166 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 
-- # case "$var" in 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:44.167 02:19:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:45.543 
02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:45.543 00:13:45.543 real 0m1.359s 00:13:45.543 user 0m4.589s 00:13:45.543 sys 0m0.132s 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:45.543 02:19:35 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:13:45.543 ************************************ 00:13:45.543 END TEST accel_decomp_full_mcore 00:13:45.543 ************************************ 00:13:45.543 02:19:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:45.543 02:19:35 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:13:45.543 02:19:35 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:13:45.543 02:19:35 accel -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:13:45.543 02:19:35 accel -- common/autotest_common.sh@10 -- # set +x 00:13:45.543 ************************************ 00:13:45.543 START TEST accel_decomp_mthread 00:13:45.543 ************************************ 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
00:13:45.543 [2024-07-11 02:19:35.702748] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:45.543 [2024-07-11 02:19:35.702821] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1755840 ] 00:13:45.543 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.543 [2024-07-11 02:19:35.761967] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.543 [2024-07-11 02:19:35.852685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:45.543 02:19:35 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:13:45.543 
02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:45.543 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:13:45.544 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:45.544 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:45.544 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:45.544 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:13:45.544 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:45.544 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:45.544 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:45.544 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:13:45.544 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:45.544 02:19:35 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:45.544 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:45.544 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:13:45.544 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:45.544 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:45.544 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:45.544 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:45.544 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:45.544 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:45.544 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:45.544 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:45.544 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:45.544 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:45.544 02:19:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:46.920 00:13:46.920 real 0m1.351s 00:13:46.920 user 0m1.223s 00:13:46.920 sys 0m0.130s 00:13:46.920 02:19:37 
accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:46.920 02:19:37 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:13:46.920 ************************************ 00:13:46.920 END TEST accel_decomp_mthread 00:13:46.920 ************************************ 00:13:46.920 02:19:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:46.920 02:19:37 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:13:46.920 02:19:37 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:13:46.920 02:19:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:46.920 02:19:37 accel -- common/autotest_common.sh@10 -- # set +x 00:13:46.920 ************************************ 00:13:46.920 START TEST accel_decomp_full_mthread 00:13:46.920 ************************************ 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:13:46.920 [2024-07-11 02:19:37.106680] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:13:46.920 [2024-07-11 02:19:37.106750] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1755967 ] 00:13:46.920 EAL: No free 2048 kB hugepages reported on node 1 00:13:46.920 [2024-07-11 02:19:37.164503] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.920 [2024-07-11 02:19:37.255523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:46.920 02:19:37 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:46.920 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 
00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read 
-r var val 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:46.921 02:19:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" 
in 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:48.291 00:13:48.291 real 0m1.377s 00:13:48.291 user 0m1.251s 00:13:48.291 sys 0m0.129s 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:48.291 02:19:38 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:13:48.291 ************************************ 00:13:48.291 END TEST accel_decomp_full_mthread 00:13:48.291 ************************************ 00:13:48.291 02:19:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:48.291 02:19:38 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:13:48.291 02:19:38 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:48.291 
02:19:38 accel -- accel/accel.sh@137 -- # build_accel_config 00:13:48.291 02:19:38 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:48.291 02:19:38 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:48.291 02:19:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:48.291 02:19:38 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:48.291 02:19:38 accel -- common/autotest_common.sh@10 -- # set +x 00:13:48.291 02:19:38 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:48.291 02:19:38 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:48.291 02:19:38 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:48.291 02:19:38 accel -- accel/accel.sh@40 -- # local IFS=, 00:13:48.291 02:19:38 accel -- accel/accel.sh@41 -- # jq -r . 00:13:48.291 ************************************ 00:13:48.291 START TEST accel_dif_functional_tests 00:13:48.291 ************************************ 00:13:48.291 02:19:38 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:48.291 [2024-07-11 02:19:38.564653] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:13:48.291 [2024-07-11 02:19:38.564759] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1756087 ] 00:13:48.291 EAL: No free 2048 kB hugepages reported on node 1 00:13:48.291 [2024-07-11 02:19:38.625879] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:48.550 [2024-07-11 02:19:38.718714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.550 [2024-07-11 02:19:38.718772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.550 [2024-07-11 02:19:38.718808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.550 00:13:48.550 00:13:48.550 CUnit - A unit testing framework for C - Version 2.1-3 00:13:48.550 http://cunit.sourceforge.net/ 00:13:48.550 00:13:48.550 00:13:48.550 Suite: accel_dif 00:13:48.550 Test: verify: DIF generated, GUARD check ...passed 00:13:48.550 Test: verify: DIF generated, APPTAG check ...passed 00:13:48.550 Test: verify: DIF generated, REFTAG check ...passed 00:13:48.550 Test: verify: DIF not generated, GUARD check ...[2024-07-11 02:19:38.799013] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:48.550 passed 00:13:48.550 Test: verify: DIF not generated, APPTAG check ...[2024-07-11 02:19:38.799085] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:48.550 passed 00:13:48.550 Test: verify: DIF not generated, REFTAG check ...[2024-07-11 02:19:38.799123] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:48.550 passed 00:13:48.550 Test: verify: APPTAG correct, APPTAG check ...passed 00:13:48.550 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-11 02:19:38.799193] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App 
Tag: LBA=30, Expected=28, Actual=14 00:13:48.550 passed 00:13:48.550 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:13:48.550 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:13:48.550 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:13:48.550 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-11 02:19:38.799346] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:13:48.550 passed 00:13:48.550 Test: verify copy: DIF generated, GUARD check ...passed 00:13:48.550 Test: verify copy: DIF generated, APPTAG check ...passed 00:13:48.550 Test: verify copy: DIF generated, REFTAG check ...passed 00:13:48.550 Test: verify copy: DIF not generated, GUARD check ...[2024-07-11 02:19:38.799533] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:48.550 passed 00:13:48.550 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-11 02:19:38.799576] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:48.550 passed 00:13:48.550 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-11 02:19:38.799616] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:48.550 passed 00:13:48.550 Test: generate copy: DIF generated, GUARD check ...passed 00:13:48.550 Test: generate copy: DIF generated, APTTAG check ...passed 00:13:48.550 Test: generate copy: DIF generated, REFTAG check ...passed 00:13:48.550 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:13:48.550 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:13:48.550 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:13:48.550 Test: generate copy: iovecs-len validate ...[2024-07-11 02:19:38.799868] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:13:48.550 passed 00:13:48.550 Test: generate copy: buffer alignment validate ...passed 00:13:48.550 00:13:48.550 Run Summary: Type Total Ran Passed Failed Inactive 00:13:48.550 suites 1 1 n/a 0 0 00:13:48.550 tests 26 26 26 0 0 00:13:48.550 asserts 115 115 115 0 n/a 00:13:48.550 00:13:48.550 Elapsed time = 0.003 seconds 00:13:48.550 00:13:48.550 real 0m0.426s 00:13:48.550 user 0m0.600s 00:13:48.550 sys 0m0.164s 00:13:48.550 02:19:38 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:48.550 02:19:38 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:13:48.550 ************************************ 00:13:48.550 END TEST accel_dif_functional_tests 00:13:48.550 ************************************ 00:13:48.550 02:19:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:13:48.808 00:13:48.808 real 0m30.133s 00:13:48.808 user 0m33.526s 00:13:48.808 sys 0m4.256s 00:13:48.808 02:19:38 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:48.808 02:19:38 accel -- common/autotest_common.sh@10 -- # set +x 00:13:48.808 ************************************ 00:13:48.808 END TEST accel 00:13:48.808 ************************************ 00:13:48.808 02:19:38 -- common/autotest_common.sh@1142 -- # return 0 00:13:48.808 02:19:38 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:13:48.808 02:19:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:48.808 02:19:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:48.808 02:19:38 -- common/autotest_common.sh@10 -- # set +x 00:13:48.808 ************************************ 00:13:48.808 START TEST accel_rpc 00:13:48.808 ************************************ 00:13:48.808 02:19:39 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:13:48.808 * Looking for test storage... 
00:13:48.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:13:48.808 02:19:39 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:48.808 02:19:39 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1756248 00:13:48.809 02:19:39 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1756248 00:13:48.809 02:19:39 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:13:48.809 02:19:39 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1756248 ']' 00:13:48.809 02:19:39 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.809 02:19:39 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:48.809 02:19:39 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.809 02:19:39 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:48.809 02:19:39 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.809 [2024-07-11 02:19:39.132376] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:13:48.809 [2024-07-11 02:19:39.132489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1756248 ] 00:13:48.809 EAL: No free 2048 kB hugepages reported on node 1 00:13:48.809 [2024-07-11 02:19:39.193182] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.066 [2024-07-11 02:19:39.280618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.066 02:19:39 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:49.066 02:19:39 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:13:49.066 02:19:39 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:13:49.066 02:19:39 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:13:49.066 02:19:39 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:13:49.066 02:19:39 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:13:49.066 02:19:39 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:13:49.066 02:19:39 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:49.066 02:19:39 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:49.066 02:19:39 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.066 ************************************ 00:13:49.066 START TEST accel_assign_opcode 00:13:49.066 ************************************ 00:13:49.066 02:19:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:13:49.066 02:19:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:13:49.066 02:19:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.066 02:19:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set 
+x 00:13:49.066 [2024-07-11 02:19:39.405404] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:13:49.066 02:19:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.066 02:19:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:13:49.066 02:19:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.066 02:19:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:49.066 [2024-07-11 02:19:39.413412] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:13:49.066 02:19:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.066 02:19:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:13:49.066 02:19:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.066 02:19:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:49.323 02:19:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.323 02:19:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:13:49.323 02:19:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:13:49.323 02:19:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:13:49.323 02:19:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.323 02:19:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:49.323 02:19:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.323 software 00:13:49.323 00:13:49.323 real 0m0.263s 00:13:49.323 user 0m0.041s 00:13:49.323 sys 0m0.008s 00:13:49.323 02:19:39 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:49.323 02:19:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:49.323 ************************************ 00:13:49.323 END TEST accel_assign_opcode 00:13:49.323 ************************************ 00:13:49.323 02:19:39 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:13:49.323 02:19:39 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1756248 00:13:49.323 02:19:39 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1756248 ']' 00:13:49.323 02:19:39 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1756248 00:13:49.323 02:19:39 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:13:49.323 02:19:39 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:49.323 02:19:39 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1756248 00:13:49.323 02:19:39 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:49.323 02:19:39 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:49.323 02:19:39 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1756248' 00:13:49.323 killing process with pid 1756248 00:13:49.323 02:19:39 accel_rpc -- common/autotest_common.sh@967 -- # kill 1756248 00:13:49.323 02:19:39 accel_rpc -- common/autotest_common.sh@972 -- # wait 1756248 00:13:49.580 00:13:49.580 real 0m0.950s 00:13:49.580 user 0m0.954s 00:13:49.580 sys 0m0.396s 00:13:49.580 02:19:39 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:49.580 02:19:39 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.580 ************************************ 00:13:49.580 END TEST accel_rpc 00:13:49.580 ************************************ 00:13:49.580 02:19:39 -- common/autotest_common.sh@1142 -- # return 0 00:13:49.580 02:19:39 -- spdk/autotest.sh@185 -- # run_test app_cmdline 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:13:49.580 02:19:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:49.580 02:19:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:49.580 02:19:39 -- common/autotest_common.sh@10 -- # set +x 00:13:49.838 ************************************ 00:13:49.838 START TEST app_cmdline 00:13:49.838 ************************************ 00:13:49.838 02:19:40 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:13:49.838 * Looking for test storage... 00:13:49.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:13:49.838 02:19:40 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:13:49.838 02:19:40 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1756420 00:13:49.838 02:19:40 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:13:49.838 02:19:40 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1756420 00:13:49.838 02:19:40 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1756420 ']' 00:13:49.838 02:19:40 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.838 02:19:40 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:49.838 02:19:40 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.838 02:19:40 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:49.838 02:19:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:49.838 [2024-07-11 02:19:40.132745] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:13:49.838 [2024-07-11 02:19:40.132842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1756420 ] 00:13:49.838 EAL: No free 2048 kB hugepages reported on node 1 00:13:49.838 [2024-07-11 02:19:40.194340] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.095 [2024-07-11 02:19:40.281792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.095 02:19:40 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:50.095 02:19:40 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:13:50.095 02:19:40 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:13:50.660 { 00:13:50.660 "version": "SPDK v24.09-pre git sha1 9937c0160", 00:13:50.660 "fields": { 00:13:50.660 "major": 24, 00:13:50.660 "minor": 9, 00:13:50.660 "patch": 0, 00:13:50.660 "suffix": "-pre", 00:13:50.660 "commit": "9937c0160" 00:13:50.660 } 00:13:50.660 } 00:13:50.660 02:19:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:13:50.660 02:19:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:13:50.660 02:19:40 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:13:50.660 02:19:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:13:50.660 02:19:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:13:50.660 02:19:40 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.660 02:19:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:13:50.660 02:19:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:50.660 02:19:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:13:50.660 02:19:40 app_cmdline -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.660 02:19:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:13:50.660 02:19:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:13:50.660 02:19:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:50.660 02:19:40 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:13:50.660 02:19:40 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:50.660 02:19:40 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:50.660 02:19:40 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:50.660 02:19:40 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:50.660 02:19:40 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:50.660 02:19:40 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:50.660 02:19:40 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:50.660 02:19:40 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:50.660 02:19:40 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:50.660 02:19:40 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:50.918 request: 00:13:50.918 { 00:13:50.918 "method": "env_dpdk_get_mem_stats", 00:13:50.918 "req_id": 1 
00:13:50.918 } 00:13:50.918 Got JSON-RPC error response 00:13:50.918 response: 00:13:50.918 { 00:13:50.918 "code": -32601, 00:13:50.918 "message": "Method not found" 00:13:50.918 } 00:13:50.918 02:19:41 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:13:50.918 02:19:41 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:50.918 02:19:41 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:50.918 02:19:41 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:50.918 02:19:41 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1756420 00:13:50.918 02:19:41 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1756420 ']' 00:13:50.918 02:19:41 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1756420 00:13:50.918 02:19:41 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:13:50.918 02:19:41 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:50.918 02:19:41 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1756420 00:13:50.918 02:19:41 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:50.918 02:19:41 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:50.918 02:19:41 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1756420' 00:13:50.918 killing process with pid 1756420 00:13:50.918 02:19:41 app_cmdline -- common/autotest_common.sh@967 -- # kill 1756420 00:13:50.918 02:19:41 app_cmdline -- common/autotest_common.sh@972 -- # wait 1756420 00:13:51.177 00:13:51.177 real 0m1.407s 00:13:51.177 user 0m1.886s 00:13:51.177 sys 0m0.440s 00:13:51.177 02:19:41 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:51.177 02:19:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:51.177 ************************************ 00:13:51.177 END TEST app_cmdline 00:13:51.177 ************************************ 00:13:51.177 02:19:41 -- 
common/autotest_common.sh@1142 -- # return 0 00:13:51.177 02:19:41 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:13:51.177 02:19:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:51.177 02:19:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:51.177 02:19:41 -- common/autotest_common.sh@10 -- # set +x 00:13:51.177 ************************************ 00:13:51.177 START TEST version 00:13:51.177 ************************************ 00:13:51.177 02:19:41 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:13:51.177 * Looking for test storage... 00:13:51.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:13:51.177 02:19:41 version -- app/version.sh@17 -- # get_header_version major 00:13:51.177 02:19:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:13:51.177 02:19:41 version -- app/version.sh@14 -- # cut -f2 00:13:51.177 02:19:41 version -- app/version.sh@14 -- # tr -d '"' 00:13:51.177 02:19:41 version -- app/version.sh@17 -- # major=24 00:13:51.177 02:19:41 version -- app/version.sh@18 -- # get_header_version minor 00:13:51.177 02:19:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:13:51.177 02:19:41 version -- app/version.sh@14 -- # cut -f2 00:13:51.177 02:19:41 version -- app/version.sh@14 -- # tr -d '"' 00:13:51.177 02:19:41 version -- app/version.sh@18 -- # minor=9 00:13:51.177 02:19:41 version -- app/version.sh@19 -- # get_header_version patch 00:13:51.177 02:19:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:13:51.177 
02:19:41 version -- app/version.sh@14 -- # cut -f2 00:13:51.177 02:19:41 version -- app/version.sh@14 -- # tr -d '"' 00:13:51.177 02:19:41 version -- app/version.sh@19 -- # patch=0 00:13:51.177 02:19:41 version -- app/version.sh@20 -- # get_header_version suffix 00:13:51.177 02:19:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:13:51.177 02:19:41 version -- app/version.sh@14 -- # cut -f2 00:13:51.177 02:19:41 version -- app/version.sh@14 -- # tr -d '"' 00:13:51.177 02:19:41 version -- app/version.sh@20 -- # suffix=-pre 00:13:51.177 02:19:41 version -- app/version.sh@22 -- # version=24.9 00:13:51.177 02:19:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:13:51.177 02:19:41 version -- app/version.sh@28 -- # version=24.9rc0 00:13:51.177 02:19:41 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:51.177 02:19:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:13:51.177 02:19:41 version -- app/version.sh@30 -- # py_version=24.9rc0 00:13:51.177 02:19:41 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:13:51.177 00:13:51.177 real 0m0.112s 00:13:51.177 user 0m0.060s 00:13:51.177 sys 0m0.075s 00:13:51.177 02:19:41 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:51.177 02:19:41 version -- common/autotest_common.sh@10 -- # set +x 00:13:51.177 ************************************ 00:13:51.177 END TEST version 00:13:51.177 ************************************ 00:13:51.436 02:19:41 -- common/autotest_common.sh@1142 -- # return 0 00:13:51.436 02:19:41 -- spdk/autotest.sh@188 -- # 
'[' 0 -eq 1 ']' 00:13:51.436 02:19:41 -- spdk/autotest.sh@198 -- # uname -s 00:13:51.436 02:19:41 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:13:51.436 02:19:41 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:13:51.436 02:19:41 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:13:51.436 02:19:41 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:13:51.436 02:19:41 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:13:51.436 02:19:41 -- spdk/autotest.sh@260 -- # timing_exit lib 00:13:51.436 02:19:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:51.436 02:19:41 -- common/autotest_common.sh@10 -- # set +x 00:13:51.436 02:19:41 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:13:51.436 02:19:41 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:13:51.436 02:19:41 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:13:51.436 02:19:41 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:13:51.436 02:19:41 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:13:51.436 02:19:41 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:13:51.436 02:19:41 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:13:51.436 02:19:41 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:51.436 02:19:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:51.436 02:19:41 -- common/autotest_common.sh@10 -- # set +x 00:13:51.436 ************************************ 00:13:51.436 START TEST nvmf_tcp 00:13:51.436 ************************************ 00:13:51.436 02:19:41 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:13:51.436 * Looking for test storage... 00:13:51.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:13:51.436 02:19:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:13:51.436 02:19:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:13:51.436 02:19:41 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:51.436 02:19:41 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:13:51.436 02:19:41 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.436 02:19:41 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.436 02:19:41 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.436 02:19:41 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.436 02:19:41 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.436 02:19:41 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.436 02:19:41 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.436 02:19:41 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.436 02:19:41 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.436 02:19:41 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.436 02:19:41 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:51.436 02:19:41 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:13:51.436 02:19:41 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.436 02:19:41 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.436 02:19:41 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:51.436 02:19:41 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.436 02:19:41 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:51.436 02:19:41 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.436 02:19:41 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.436 02:19:41 nvmf_tcp -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.436 02:19:41 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.436 02:19:41 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.436 02:19:41 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.436 02:19:41 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:13:51.437 02:19:41 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.437 02:19:41 nvmf_tcp -- 
nvmf/common.sh@47 -- # : 0 00:13:51.437 02:19:41 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:51.437 02:19:41 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:51.437 02:19:41 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:51.437 02:19:41 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.437 02:19:41 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.437 02:19:41 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:51.437 02:19:41 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:51.437 02:19:41 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:51.437 02:19:41 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:51.437 02:19:41 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:13:51.437 02:19:41 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:13:51.437 02:19:41 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:51.437 02:19:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:51.437 02:19:41 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:13:51.437 02:19:41 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:51.437 02:19:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:51.437 02:19:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:51.437 02:19:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:51.437 ************************************ 00:13:51.437 START TEST nvmf_example 00:13:51.437 ************************************ 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:51.437 * Looking for test storage... 
00:13:51.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.437 02:19:41 
nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:51.437 02:19:41 
nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.437 
02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:51.437 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:51.695 02:19:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:13:51.695 02:19:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:53.074 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:53.074 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:13:53.074 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:53.074 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:53.074 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:53.074 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:53.074 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:53.074 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:13:53.074 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:53.074 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:13:53.074 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:13:53.074 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:13:53.074 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:13:53.074 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:13:53.074 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:13:53.074 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:53.074 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.074 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:53.074 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.074 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.074 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.074 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:53.074 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:13:53.075 Found 0000:08:00.0 (0x8086 - 0x159b) 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:13:53.075 Found 0000:08:00.1 (0x8086 - 0x159b) 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:13:53.075 Found net devices under 0000:08:00.0: cvl_0_0 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:13:53.075 Found net devices under 0000:08:00.1: cvl_0_1 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:53.075 02:19:43 
nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:53.075 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:53.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:53.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:13:53.334 00:13:53.334 --- 10.0.0.2 ping statistics --- 00:13:53.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.334 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:53.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:53.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:13:53.334 00:13:53.334 --- 10.0.0.1 ping statistics --- 00:13:53.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.334 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:53.334 02:19:43 
nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1757913 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1757913 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1757913 ']' 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
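For reference, the network plumbing traced above (nvmf/common.sh@244-268) can be condensed into a standalone sketch. The interface names (cvl_0_0, cvl_0_1), the namespace name, and the 10.0.0.0/24 addresses are taken from this run; on other machines the ice-driver port names and addressing will differ. Requires root:

```shell
#!/usr/bin/env bash
# Sketch of the namespace-based loopback topology the nvmf TCP tests set up.
# Assumes two back-to-back ports of the same NIC, as in this run
# (cvl_0_0 / cvl_0_1); adjust names and addresses for your hardware.
set -euo pipefail

TARGET_IF=cvl_0_0          # moved into the target namespace
INITIATOR_IF=cvl_0_1       # stays in the root namespace
NS=cvl_0_0_ns_spdk
INITIATOR_IP=10.0.0.1
TARGET_IP=10.0.0.2
NVMF_PORT=4420

# Start from a clean slate on both ports.
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

# Isolate the target port in its own namespace.
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

# Address both ends of the link.
ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"

# Bring the interfaces (and the namespace loopback) up.
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Admit NVMe/TCP traffic on the initiator side, then verify reachability
# in both directions, as the log does before starting the target.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport "$NVMF_PORT" -j ACCEPT
ping -c 1 "$TARGET_IP"
ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"
```

The SPDK target is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk .../build/examples/nvmf -i 0 -g 10000 -m 0xF`), so initiator-side tools in the root namespace reach it at 10.0.0.2:4420.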
00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:53.334 02:19:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:53.334 EAL: No free 2048 kB hugepages reported on node 1 00:13:53.592 02:19:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:53.592 02:19:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:13:53.592 02:19:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:13:53.592 02:19:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:53.592 02:19:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:53.592 02:19:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:53.592 02:19:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.592 02:19:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:53.592 02:19:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.592 02:19:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:13:53.592 02:19:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.592 02:19:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:53.592 02:19:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.592 02:19:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:13:53.592 02:19:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:53.592 02:19:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.592 02:19:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:53.592 02:19:44 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.592 02:19:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:13:53.592 02:19:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:53.592 02:19:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.592 02:19:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:53.592 02:19:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.592 02:19:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:53.592 02:19:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.592 02:19:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:53.851 02:19:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.851 02:19:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:13:53.851 02:19:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:53.851 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.044 Initializing NVMe Controllers 00:14:06.044 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:06.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:06.044 Initialization complete. Launching workers. 
00:14:06.044 ======================================================== 00:14:06.044 Latency(us) 00:14:06.044 Device Information : IOPS MiB/s Average min max 00:14:06.044 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13732.00 53.64 4660.25 716.68 15133.95 00:14:06.044 ======================================================== 00:14:06.044 Total : 13732.00 53.64 4660.25 716.68 15133.95 00:14:06.044 00:14:06.044 02:19:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:14:06.044 02:19:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:14:06.044 02:19:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:06.044 02:19:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:14:06.044 02:19:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:06.044 02:19:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:14:06.044 02:19:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:06.044 02:19:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:06.044 rmmod nvme_tcp 00:14:06.044 rmmod nvme_fabrics 00:14:06.044 rmmod nvme_keyring 00:14:06.044 02:19:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:06.044 02:19:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:14:06.044 02:19:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:14:06.044 02:19:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1757913 ']' 00:14:06.044 02:19:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1757913 00:14:06.044 02:19:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1757913 ']' 00:14:06.044 02:19:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1757913 00:14:06.044 02:19:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:14:06.045 02:19:54 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:06.045 02:19:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1757913 00:14:06.045 02:19:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:14:06.045 02:19:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:14:06.045 02:19:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1757913' 00:14:06.045 killing process with pid 1757913 00:14:06.045 02:19:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 1757913 00:14:06.045 02:19:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 1757913 00:14:06.045 nvmf threads initialize successfully 00:14:06.045 bdev subsystem init successfully 00:14:06.045 created a nvmf target service 00:14:06.045 create targets's poll groups done 00:14:06.045 all subsystems of target started 00:14:06.045 nvmf target is running 00:14:06.045 all subsystems of target stopped 00:14:06.045 destroy targets's poll groups done 00:14:06.045 destroyed the nvmf target service 00:14:06.045 bdev subsystem finish successfully 00:14:06.045 nvmf threads destroy successfully 00:14:06.045 02:19:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:06.045 02:19:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:06.045 02:19:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:06.045 02:19:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:06.045 02:19:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:06.045 02:19:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.045 02:19:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:06.045 02:19:54 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.304 02:19:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:06.304 02:19:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:14:06.304 02:19:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:06.304 02:19:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:06.304 00:14:06.304 real 0m14.837s 00:14:06.304 user 0m42.198s 00:14:06.304 sys 0m2.882s 00:14:06.304 02:19:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:06.304 02:19:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:06.304 ************************************ 00:14:06.304 END TEST nvmf_example 00:14:06.304 ************************************ 00:14:06.304 02:19:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:06.304 02:19:56 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:06.304 02:19:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:06.304 02:19:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:06.304 02:19:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:06.304 ************************************ 00:14:06.304 START TEST nvmf_filesystem 00:14:06.304 ************************************ 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:06.304 * Looking for test storage... 
00:14:06.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:06.304 02:19:56 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem 
-- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # 
CONFIG_APPS=y 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:14:06.304 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:14:06.305 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:14:06.305 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:14:06.305 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:14:06.305 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:14:06.305 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:14:06.305 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:14:06.305 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:14:06.305 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:14:06.305 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:06.305 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:14:06.305 02:19:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:14:06.305 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:14:06.305 02:19:56 
nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:14:06.566 02:19:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:14:06.566 02:19:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:14:06.566 02:19:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:06.566 02:19:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:06.566 02:19:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:14:06.566 02:19:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:06.566 02:19:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:06.566 02:19:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:06.566 02:19:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:06.566 02:19:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:14:06.566 02:19:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:14:06.566 02:19:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:14:06.566 02:19:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:14:06.566 02:19:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ 
#ifndef SPDK_CONFIG_H 00:14:06.566 #define SPDK_CONFIG_H 00:14:06.566 #define SPDK_CONFIG_APPS 1 00:14:06.566 #define SPDK_CONFIG_ARCH native 00:14:06.566 #undef SPDK_CONFIG_ASAN 00:14:06.566 #undef SPDK_CONFIG_AVAHI 00:14:06.566 #undef SPDK_CONFIG_CET 00:14:06.566 #define SPDK_CONFIG_COVERAGE 1 00:14:06.566 #define SPDK_CONFIG_CROSS_PREFIX 00:14:06.566 #undef SPDK_CONFIG_CRYPTO 00:14:06.566 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:06.566 #undef SPDK_CONFIG_CUSTOMOCF 00:14:06.566 #undef SPDK_CONFIG_DAOS 00:14:06.566 #define SPDK_CONFIG_DAOS_DIR 00:14:06.566 #define SPDK_CONFIG_DEBUG 1 00:14:06.566 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:06.566 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:14:06.566 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:14:06.566 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:14:06.566 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:06.566 #undef SPDK_CONFIG_DPDK_UADK 00:14:06.566 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:14:06.566 #define SPDK_CONFIG_EXAMPLES 1 00:14:06.566 #undef SPDK_CONFIG_FC 00:14:06.566 #define SPDK_CONFIG_FC_PATH 00:14:06.566 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:06.566 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:06.566 #undef SPDK_CONFIG_FUSE 00:14:06.566 #undef SPDK_CONFIG_FUZZER 00:14:06.566 #define SPDK_CONFIG_FUZZER_LIB 00:14:06.566 #undef SPDK_CONFIG_GOLANG 00:14:06.566 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:14:06.566 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:06.566 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:06.566 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:14:06.566 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:06.566 #undef SPDK_CONFIG_HAVE_LIBBSD 00:14:06.566 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:14:06.566 #define SPDK_CONFIG_IDXD 1 00:14:06.566 #define SPDK_CONFIG_IDXD_KERNEL 1 00:14:06.566 #undef 
SPDK_CONFIG_IPSEC_MB 00:14:06.566 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:06.566 #define SPDK_CONFIG_ISAL 1 00:14:06.566 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:06.566 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:14:06.566 #define SPDK_CONFIG_LIBDIR 00:14:06.566 #undef SPDK_CONFIG_LTO 00:14:06.566 #define SPDK_CONFIG_MAX_LCORES 128 00:14:06.566 #define SPDK_CONFIG_NVME_CUSE 1 00:14:06.566 #undef SPDK_CONFIG_OCF 00:14:06.566 #define SPDK_CONFIG_OCF_PATH 00:14:06.566 #define SPDK_CONFIG_OPENSSL_PATH 00:14:06.566 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:06.566 #define SPDK_CONFIG_PGO_DIR 00:14:06.566 #undef SPDK_CONFIG_PGO_USE 00:14:06.566 #define SPDK_CONFIG_PREFIX /usr/local 00:14:06.566 #undef SPDK_CONFIG_RAID5F 00:14:06.566 #undef SPDK_CONFIG_RBD 00:14:06.566 #define SPDK_CONFIG_RDMA 1 00:14:06.566 #define SPDK_CONFIG_RDMA_PROV verbs 00:14:06.566 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:06.566 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:14:06.566 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:06.566 #define SPDK_CONFIG_SHARED 1 00:14:06.566 #undef SPDK_CONFIG_SMA 00:14:06.566 #define SPDK_CONFIG_TESTS 1 00:14:06.566 #undef SPDK_CONFIG_TSAN 00:14:06.566 #define SPDK_CONFIG_UBLK 1 00:14:06.566 #define SPDK_CONFIG_UBSAN 1 00:14:06.566 #undef SPDK_CONFIG_UNIT_TESTS 00:14:06.566 #undef SPDK_CONFIG_URING 00:14:06.566 #define SPDK_CONFIG_URING_PATH 00:14:06.566 #undef SPDK_CONFIG_URING_ZNS 00:14:06.566 #undef SPDK_CONFIG_USDT 00:14:06.566 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:14:06.566 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:06.566 #define SPDK_CONFIG_VFIO_USER 1 00:14:06.566 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:06.566 #define SPDK_CONFIG_VHOST 1 00:14:06.566 #define SPDK_CONFIG_VIRTIO 1 00:14:06.566 #undef SPDK_CONFIG_VTUNE 00:14:06.566 #define SPDK_CONFIG_VTUNE_DIR 00:14:06.566 #define SPDK_CONFIG_WERROR 1 00:14:06.566 #define SPDK_CONFIG_WPDK_DIR 00:14:06.566 #undef SPDK_CONFIG_XNVME 00:14:06.566 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ 
\S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:06.566 02:19:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:06.566 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 
00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:14:06.567 02:19:56 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:14:06.567 02:19:56 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:14:06.567 02:19:56 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export 
SPDK_TEST_VMD 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v22.11.4 00:14:06.567 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export 
SPDK_TEST_NVMF_NICS 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:06.568 02:19:56 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j32 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:14:06.568 02:19:56 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1759224 ]] 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1759224 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:14:06.568 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.1MToEq 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 
00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.1MToEq/tests/target /tmp/spdk.1MToEq 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1957711872 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3326717952 00:14:06.569 02:19:56 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=41470349312 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=53546156032 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12075806720 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=26768367616 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=26773078016 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=10700750848 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=10709233664 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # 
uses["$mount"]=8482816 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=26772492288 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=26773078016 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=585728 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=5354610688 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5354614784 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:14:06.569 * Looking for test storage... 
00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=41470349312 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=14290399232 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:06.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.569 02:19:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:06.570 02:19:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 
00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:14:08.475 Found 0000:08:00.0 (0x8086 - 0x159b) 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:14:08.475 Found 0000:08:00.1 (0x8086 - 0x159b) 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:14:08.475 Found net devices under 0000:08:00.0: cvl_0_0 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:14:08.475 Found net devices under 0000:08:00.1: cvl_0_1 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:08.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:08.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:14:08.475 00:14:08.475 --- 10.0.0.2 ping statistics --- 00:14:08.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.475 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:08.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:08.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:14:08.475 00:14:08.475 --- 10.0.0.1 ping statistics --- 00:14:08.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.475 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:08.475 02:19:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:08.475 ************************************ 00:14:08.475 START TEST nvmf_filesystem_no_in_capsule 00:14:08.475 ************************************ 00:14:08.476 02:19:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:14:08.476 02:19:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # 
in_capsule=0 00:14:08.476 02:19:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:08.476 02:19:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:08.476 02:19:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:08.476 02:19:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:08.476 02:19:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1760390 00:14:08.476 02:19:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:08.476 02:19:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1760390 00:14:08.476 02:19:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1760390 ']' 00:14:08.476 02:19:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.476 02:19:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:08.476 02:19:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:08.476 02:19:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:08.476 02:19:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:08.476 [2024-07-11 02:19:58.715557] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:14:08.476 [2024-07-11 02:19:58.715644] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.476 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.476 [2024-07-11 02:19:58.781373] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:08.476 [2024-07-11 02:19:58.871779] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.476 [2024-07-11 02:19:58.871837] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.476 [2024-07-11 02:19:58.871854] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.476 [2024-07-11 02:19:58.871873] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.476 [2024-07-11 02:19:58.871886] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:08.476 [2024-07-11 02:19:58.871975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.476 [2024-07-11 02:19:58.872029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:08.476 [2024-07-11 02:19:58.872079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:08.476 [2024-07-11 02:19:58.872081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.734 02:19:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:08.734 02:19:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:14:08.734 02:19:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:08.734 02:19:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:08.734 02:19:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:08.734 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.734 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:08.734 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:08.734 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.734 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:08.734 [2024-07-11 02:19:59.007173] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.734 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.734 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:08.734 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.734 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:08.734 Malloc1 00:14:08.734 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.734 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:08.734 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.734 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:08.992 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.992 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:08.992 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.992 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:08.992 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.992 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.992 02:19:59 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.992 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:08.992 [2024-07-11 02:19:59.173680] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.992 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.992 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:08.992 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:14:08.992 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:08.992 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:14:08.992 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:14:08.992 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:08.992 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.992 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:08.992 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.992 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:08.992 { 00:14:08.992 "name": "Malloc1", 00:14:08.992 "aliases": [ 00:14:08.992 "33dee57c-1f5b-4849-af9d-f2b57c073983" 00:14:08.992 ], 00:14:08.992 "product_name": "Malloc disk", 
00:14:08.992 "block_size": 512, 00:14:08.992 "num_blocks": 1048576, 00:14:08.992 "uuid": "33dee57c-1f5b-4849-af9d-f2b57c073983", 00:14:08.992 "assigned_rate_limits": { 00:14:08.992 "rw_ios_per_sec": 0, 00:14:08.992 "rw_mbytes_per_sec": 0, 00:14:08.992 "r_mbytes_per_sec": 0, 00:14:08.992 "w_mbytes_per_sec": 0 00:14:08.992 }, 00:14:08.992 "claimed": true, 00:14:08.992 "claim_type": "exclusive_write", 00:14:08.992 "zoned": false, 00:14:08.992 "supported_io_types": { 00:14:08.992 "read": true, 00:14:08.992 "write": true, 00:14:08.992 "unmap": true, 00:14:08.992 "flush": true, 00:14:08.992 "reset": true, 00:14:08.992 "nvme_admin": false, 00:14:08.992 "nvme_io": false, 00:14:08.992 "nvme_io_md": false, 00:14:08.992 "write_zeroes": true, 00:14:08.992 "zcopy": true, 00:14:08.992 "get_zone_info": false, 00:14:08.992 "zone_management": false, 00:14:08.992 "zone_append": false, 00:14:08.992 "compare": false, 00:14:08.992 "compare_and_write": false, 00:14:08.992 "abort": true, 00:14:08.992 "seek_hole": false, 00:14:08.992 "seek_data": false, 00:14:08.992 "copy": true, 00:14:08.992 "nvme_iov_md": false 00:14:08.992 }, 00:14:08.992 "memory_domains": [ 00:14:08.992 { 00:14:08.992 "dma_device_id": "system", 00:14:08.992 "dma_device_type": 1 00:14:08.992 }, 00:14:08.992 { 00:14:08.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.992 "dma_device_type": 2 00:14:08.992 } 00:14:08.992 ], 00:14:08.992 "driver_specific": {} 00:14:08.992 } 00:14:08.992 ]' 00:14:08.992 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:08.992 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:14:08.992 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:08.992 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:14:08.992 
02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:14:08.992 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:14:08.992 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:08.992 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:09.559 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:09.559 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:14:09.559 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:09.559 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:09.559 02:19:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:14:11.457 02:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:11.457 02:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:11.457 02:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:11.457 02:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:11.457 02:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:11.457 02:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:14:11.457 02:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:11.457 02:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:11.457 02:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:11.457 02:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:11.457 02:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:11.457 02:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:11.457 02:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:11.457 02:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:11.457 02:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:11.457 02:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:11.457 02:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:11.714 02:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:12.281 02:20:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:13.245 02:20:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:14:13.245 02:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:13.245 02:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:13.245 02:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:13.245 02:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:13.245 ************************************ 00:14:13.245 START TEST filesystem_ext4 00:14:13.245 ************************************ 00:14:13.245 02:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:13.245 02:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:13.245 02:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:13.245 02:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:13.245 02:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:14:13.245 02:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:14:13.245 02:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:14:13.245 02:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 
00:14:13.245 02:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:14:13.245 02:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:14:13.245 02:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:13.245 mke2fs 1.46.5 (30-Dec-2021) 00:14:13.245 Discarding device blocks: 0/522240 done 00:14:13.245 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:13.245 Filesystem UUID: 9e0864f7-a3d5-4375-8e93-2fd060fea86f 00:14:13.245 Superblock backups stored on blocks: 00:14:13.245 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:13.245 00:14:13.245 Allocating group tables: 0/64 done 00:14:13.245 Writing inode tables: 0/64 done 00:14:14.617 Creating journal (8192 blocks): done 00:14:14.617 Writing superblocks and filesystem accounting information: 0/64 done 00:14:14.617 00:14:14.617 02:20:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:14:14.617 02:20:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:14.876 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:14.876 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:14:14.876 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:14.876 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:14:14.876 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
target/filesystem.sh@29 -- # i=0 00:14:14.876 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:14.876 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1760390 00:14:14.876 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:14.876 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:14.876 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:14.876 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:14.876 00:14:14.876 real 0m1.723s 00:14:14.876 user 0m0.019s 00:14:14.876 sys 0m0.059s 00:14:14.876 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:14.876 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:14.876 ************************************ 00:14:14.876 END TEST filesystem_ext4 00:14:14.876 ************************************ 00:14:14.876 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:14:14.876 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:14.876 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:14.876 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:14.876 02:20:05 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:14.876 ************************************ 00:14:14.876 START TEST filesystem_btrfs 00:14:14.876 ************************************ 00:14:14.876 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:14.876 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:14.876 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:14.877 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:14.877 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:14:14.877 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:14:14.877 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:14:14.877 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:14:14.877 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:14:14.877 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:14:14.877 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:15.135 btrfs-progs v6.6.2 00:14:15.135 See https://btrfs.readthedocs.io for more 
information. 00:14:15.135 00:14:15.135 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:14:15.135 NOTE: several default settings have changed in version 5.15, please make sure 00:14:15.135 this does not affect your deployments: 00:14:15.135 - DUP for metadata (-m dup) 00:14:15.135 - enabled no-holes (-O no-holes) 00:14:15.135 - enabled free-space-tree (-R free-space-tree) 00:14:15.135 00:14:15.135 Label: (null) 00:14:15.135 UUID: 4436d1d2-927d-4200-8ae7-e8ffb8c55572 00:14:15.135 Node size: 16384 00:14:15.135 Sector size: 4096 00:14:15.135 Filesystem size: 510.00MiB 00:14:15.135 Block group profiles: 00:14:15.135 Data: single 8.00MiB 00:14:15.135 Metadata: DUP 32.00MiB 00:14:15.135 System: DUP 8.00MiB 00:14:15.135 SSD detected: yes 00:14:15.135 Zoned device: no 00:14:15.135 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:14:15.135 Runtime features: free-space-tree 00:14:15.135 Checksum: crc32c 00:14:15.135 Number of devices: 1 00:14:15.135 Devices: 00:14:15.135 ID SIZE PATH 00:14:15.135 1 510.00MiB /dev/nvme0n1p1 00:14:15.135 00:14:15.135 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:14:15.135 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:15.702 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:15.702 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:14:15.702 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:15.702 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:14:15.702 02:20:05 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:15.702 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:15.702 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1760390 00:14:15.702 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:15.702 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:15.702 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:15.702 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:15.702 00:14:15.702 real 0m0.694s 00:14:15.702 user 0m0.022s 00:14:15.702 sys 0m0.110s 00:14:15.702 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:15.702 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:15.702 ************************************ 00:14:15.702 END TEST filesystem_btrfs 00:14:15.702 ************************************ 00:14:15.702 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:14:15.702 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:14:15.702 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:15.702 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:14:15.702 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:15.702 ************************************ 00:14:15.702 START TEST filesystem_xfs 00:14:15.702 ************************************ 00:14:15.702 02:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:14:15.702 02:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:15.702 02:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:15.702 02:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:15.702 02:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:14:15.702 02:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:14:15.702 02:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:14:15.702 02:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:14:15.702 02:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:14:15.702 02:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:14:15.702 02:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:15.702 meta-data=/dev/nvme0n1p1 isize=512 
agcount=4, agsize=32640 blks 00:14:15.702 = sectsz=512 attr=2, projid32bit=1 00:14:15.702 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:15.702 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:15.702 data = bsize=4096 blocks=130560, imaxpct=25 00:14:15.702 = sunit=0 swidth=0 blks 00:14:15.702 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:15.702 log =internal log bsize=4096 blocks=16384, version=2 00:14:15.702 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:15.702 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:16.641 Discarding blocks...Done. 00:14:16.641 02:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:14:16.641 02:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:19.172 02:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:19.172 02:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:14:19.430 02:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:19.430 02:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:14:19.430 02:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:14:19.430 02:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:19.430 02:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1760390 00:14:19.430 02:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:19.430 02:20:09 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:19.430 02:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:19.430 02:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:19.430 00:14:19.430 real 0m3.631s 00:14:19.430 user 0m0.021s 00:14:19.430 sys 0m0.060s 00:14:19.430 02:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:19.430 02:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:19.430 ************************************ 00:14:19.430 END TEST filesystem_xfs 00:14:19.430 ************************************ 00:14:19.430 02:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:14:19.430 02:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:19.687 02:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:19.687 02:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:19.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.687 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:19.687 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:14:19.687 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:19.687 02:20:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:19.687 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:19.687 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:19.687 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:14:19.687 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:19.687 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.687 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:19.687 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.687 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:19.687 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1760390 00:14:19.687 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1760390 ']' 00:14:19.687 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1760390 00:14:19.687 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:14:19.687 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:19.687 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 1760390 00:14:19.687 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:19.687 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:19.687 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1760390' 00:14:19.687 killing process with pid 1760390 00:14:19.687 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1760390 00:14:19.687 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1760390 00:14:20.253 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:20.253 00:14:20.253 real 0m11.722s 00:14:20.253 user 0m45.056s 00:14:20.253 sys 0m1.734s 00:14:20.253 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:20.253 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:20.253 ************************************ 00:14:20.253 END TEST nvmf_filesystem_no_in_capsule 00:14:20.253 ************************************ 00:14:20.253 02:20:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:14:20.253 02:20:10 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:14:20.253 02:20:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:20.253 02:20:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:20.253 02:20:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:20.253 ************************************ 00:14:20.253 START TEST 
nvmf_filesystem_in_capsule 00:14:20.253 ************************************ 00:14:20.253 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:14:20.253 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:14:20.253 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:20.253 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:20.253 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:20.253 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:20.253 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1761718 00:14:20.253 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:20.253 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1761718 00:14:20.253 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1761718 ']' 00:14:20.253 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.253 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:20.253 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:20.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.253 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:20.253 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:20.253 [2024-07-11 02:20:10.496403] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:14:20.253 [2024-07-11 02:20:10.496500] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.253 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.253 [2024-07-11 02:20:10.562641] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:20.253 [2024-07-11 02:20:10.652934] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.253 [2024-07-11 02:20:10.652995] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:20.253 [2024-07-11 02:20:10.653012] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:20.253 [2024-07-11 02:20:10.653026] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:20.253 [2024-07-11 02:20:10.653038] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:20.253 [2024-07-11 02:20:10.653116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.253 [2024-07-11 02:20:10.653168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:20.253 [2024-07-11 02:20:10.653214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:20.253 [2024-07-11 02:20:10.653217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.511 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:20.511 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:14:20.511 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:20.511 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:20.511 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:20.511 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.511 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:20.511 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:14:20.511 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.511 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:20.511 [2024-07-11 02:20:10.802266] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.511 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:14:20.511 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:20.511 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.511 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:20.769 Malloc1 00:14:20.769 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.769 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:20.769 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.769 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:20.769 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.769 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:20.769 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.769 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:20.769 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.769 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:20.769 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.769 02:20:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:20.769 [2024-07-11 02:20:10.968365] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.769 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.769 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:20.769 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:14:20.769 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:20.769 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:14:20.769 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:14:20.769 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:20.769 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.769 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:20.769 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.769 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:20.769 { 00:14:20.769 "name": "Malloc1", 00:14:20.769 "aliases": [ 00:14:20.769 "1b08d231-a7e1-40b3-8ffb-b851e3073901" 00:14:20.769 ], 00:14:20.769 "product_name": "Malloc disk", 00:14:20.769 "block_size": 512, 00:14:20.769 "num_blocks": 1048576, 00:14:20.769 "uuid": "1b08d231-a7e1-40b3-8ffb-b851e3073901", 00:14:20.769 "assigned_rate_limits": { 
00:14:20.769 "rw_ios_per_sec": 0, 00:14:20.769 "rw_mbytes_per_sec": 0, 00:14:20.769 "r_mbytes_per_sec": 0, 00:14:20.769 "w_mbytes_per_sec": 0 00:14:20.769 }, 00:14:20.769 "claimed": true, 00:14:20.769 "claim_type": "exclusive_write", 00:14:20.769 "zoned": false, 00:14:20.769 "supported_io_types": { 00:14:20.769 "read": true, 00:14:20.769 "write": true, 00:14:20.769 "unmap": true, 00:14:20.769 "flush": true, 00:14:20.769 "reset": true, 00:14:20.769 "nvme_admin": false, 00:14:20.769 "nvme_io": false, 00:14:20.769 "nvme_io_md": false, 00:14:20.769 "write_zeroes": true, 00:14:20.769 "zcopy": true, 00:14:20.769 "get_zone_info": false, 00:14:20.769 "zone_management": false, 00:14:20.769 "zone_append": false, 00:14:20.769 "compare": false, 00:14:20.769 "compare_and_write": false, 00:14:20.769 "abort": true, 00:14:20.769 "seek_hole": false, 00:14:20.769 "seek_data": false, 00:14:20.769 "copy": true, 00:14:20.769 "nvme_iov_md": false 00:14:20.769 }, 00:14:20.769 "memory_domains": [ 00:14:20.769 { 00:14:20.769 "dma_device_id": "system", 00:14:20.769 "dma_device_type": 1 00:14:20.769 }, 00:14:20.769 { 00:14:20.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.769 "dma_device_type": 2 00:14:20.769 } 00:14:20.769 ], 00:14:20.769 "driver_specific": {} 00:14:20.769 } 00:14:20.769 ]' 00:14:20.769 02:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:20.769 02:20:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:14:20.769 02:20:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:20.769 02:20:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:14:20.769 02:20:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:14:20.769 02:20:11 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:14:20.769 02:20:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:20.769 02:20:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:21.334 02:20:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:21.334 02:20:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:14:21.334 02:20:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:21.334 02:20:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:21.334 02:20:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:14:23.232 02:20:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:23.232 02:20:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:23.232 02:20:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:23.232 02:20:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:23.232 02:20:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:23.232 02:20:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # 
return 0 00:14:23.232 02:20:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:23.232 02:20:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:23.232 02:20:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:23.232 02:20:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:23.232 02:20:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:23.232 02:20:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:23.232 02:20:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:23.232 02:20:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:23.232 02:20:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:23.232 02:20:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:23.232 02:20:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:23.489 02:20:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:24.422 02:20:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:25.796 02:20:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:14:25.796 02:20:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 
00:14:25.796 02:20:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:25.796 02:20:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:25.796 02:20:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:25.796 ************************************ 00:14:25.796 START TEST filesystem_in_capsule_ext4 00:14:25.796 ************************************ 00:14:25.796 02:20:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:25.796 02:20:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:25.796 02:20:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:25.796 02:20:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:25.796 02:20:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:14:25.796 02:20:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:14:25.796 02:20:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:14:25.796 02:20:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:14:25.796 02:20:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:14:25.796 02:20:15 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:14:25.796 02:20:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:25.796 mke2fs 1.46.5 (30-Dec-2021) 00:14:25.796 Discarding device blocks: 0/522240 done 00:14:25.796 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:25.796 Filesystem UUID: b021dbeb-249f-465b-aca2-341e3057029b 00:14:25.796 Superblock backups stored on blocks: 00:14:25.796 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:25.796 00:14:25.796 Allocating group tables: 0/64 done 00:14:25.796 Writing inode tables: 0/64 done 00:14:28.322 Creating journal (8192 blocks): done 00:14:28.322 Writing superblocks and filesystem accounting information: 0/64 done 00:14:28.322 00:14:28.322 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:14:28.322 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:28.323 02:20:18 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1761718 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:28.323 00:14:28.323 real 0m2.645s 00:14:28.323 user 0m0.013s 00:14:28.323 sys 0m0.062s 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:28.323 ************************************ 00:14:28.323 END TEST filesystem_in_capsule_ext4 00:14:28.323 ************************************ 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:28.323 
02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:28.323 ************************************ 00:14:28.323 START TEST filesystem_in_capsule_btrfs 00:14:28.323 ************************************ 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f 
/dev/nvme0n1p1 00:14:28.323 btrfs-progs v6.6.2 00:14:28.323 See https://btrfs.readthedocs.io for more information. 00:14:28.323 00:14:28.323 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:14:28.323 NOTE: several default settings have changed in version 5.15, please make sure 00:14:28.323 this does not affect your deployments: 00:14:28.323 - DUP for metadata (-m dup) 00:14:28.323 - enabled no-holes (-O no-holes) 00:14:28.323 - enabled free-space-tree (-R free-space-tree) 00:14:28.323 00:14:28.323 Label: (null) 00:14:28.323 UUID: 0d008da8-22a9-4c01-bbd0-515ebd0ca3fd 00:14:28.323 Node size: 16384 00:14:28.323 Sector size: 4096 00:14:28.323 Filesystem size: 510.00MiB 00:14:28.323 Block group profiles: 00:14:28.323 Data: single 8.00MiB 00:14:28.323 Metadata: DUP 32.00MiB 00:14:28.323 System: DUP 8.00MiB 00:14:28.323 SSD detected: yes 00:14:28.323 Zoned device: no 00:14:28.323 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:14:28.323 Runtime features: free-space-tree 00:14:28.323 Checksum: crc32c 00:14:28.323 Number of devices: 1 00:14:28.323 Devices: 00:14:28.323 ID SIZE PATH 00:14:28.323 1 510.00MiB /dev/nvme0n1p1 00:14:28.323 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:14:28.323 02:20:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:29.256 02:20:19 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1761718 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:29.256 00:14:29.256 real 0m0.895s 00:14:29.256 user 0m0.024s 00:14:29.256 sys 0m0.107s 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:29.256 ************************************ 00:14:29.256 END TEST filesystem_in_capsule_btrfs 00:14:29.256 ************************************ 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create 
xfs nvme0n1 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:29.256 ************************************ 00:14:29.256 START TEST filesystem_in_capsule_xfs 00:14:29.256 ************************************ 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:14:29.256 02:20:19 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:14:29.256 02:20:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:29.256 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:29.256 = sectsz=512 attr=2, projid32bit=1 00:14:29.256 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:29.256 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:29.256 data = bsize=4096 blocks=130560, imaxpct=25 00:14:29.256 = sunit=0 swidth=0 blks 00:14:29.257 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:29.257 log =internal log bsize=4096 blocks=16384, version=2 00:14:29.257 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:29.257 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:30.189 Discarding blocks...Done. 00:14:30.189 02:20:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:14:30.189 02:20:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:32.717 02:20:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:32.717 02:20:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:14:32.717 02:20:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:32.717 02:20:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:14:32.717 02:20:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:14:32.717 02:20:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:32.717 02:20:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1761718 00:14:32.717 02:20:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:32.717 02:20:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:32.717 02:20:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:32.717 02:20:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:32.717 00:14:32.717 real 0m3.242s 00:14:32.717 user 0m0.017s 00:14:32.717 sys 0m0.058s 00:14:32.717 02:20:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:32.717 02:20:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:32.717 ************************************ 00:14:32.717 END TEST filesystem_in_capsule_xfs 00:14:32.717 ************************************ 00:14:32.717 02:20:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:14:32.717 02:20:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:32.717 02:20:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:32.717 02:20:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:32.717 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:14:32.717 02:20:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:32.717 02:20:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:14:32.717 02:20:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:32.717 02:20:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:32.717 02:20:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:32.717 02:20:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:32.717 02:20:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:14:32.717 02:20:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:32.717 02:20:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.717 02:20:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:32.717 02:20:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.717 02:20:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:32.717 02:20:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1761718 00:14:32.717 02:20:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1761718 ']' 00:14:32.717 02:20:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@952 -- # kill -0 1761718 00:14:32.717 02:20:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:14:32.717 02:20:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:32.717 02:20:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1761718 00:14:32.717 02:20:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:32.717 02:20:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:32.717 02:20:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1761718' 00:14:32.717 killing process with pid 1761718 00:14:32.717 02:20:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1761718 00:14:32.717 02:20:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1761718 00:14:32.977 02:20:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:32.977 00:14:32.977 real 0m12.923s 00:14:32.977 user 0m49.630s 00:14:32.977 sys 0m1.948s 00:14:32.977 02:20:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:32.977 02:20:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:32.977 ************************************ 00:14:32.977 END TEST nvmf_filesystem_in_capsule 00:14:32.977 ************************************ 00:14:32.977 02:20:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:14:32.977 02:20:23 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:14:32.977 02:20:23 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:32.977 02:20:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:14:33.243 02:20:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:33.243 02:20:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:14:33.243 02:20:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:33.243 02:20:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:33.243 rmmod nvme_tcp 00:14:33.243 rmmod nvme_fabrics 00:14:33.243 rmmod nvme_keyring 00:14:33.243 02:20:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:33.243 02:20:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:14:33.243 02:20:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:14:33.243 02:20:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:33.243 02:20:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:33.243 02:20:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:33.243 02:20:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:33.243 02:20:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:33.243 02:20:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:33.243 02:20:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.243 02:20:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.243 02:20:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.153 02:20:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:35.153 00:14:35.153 real 0m28.849s 00:14:35.153 user 1m35.469s 00:14:35.153 sys 0m5.088s 00:14:35.153 02:20:25 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:35.153 02:20:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:35.153 ************************************ 00:14:35.153 END TEST nvmf_filesystem 00:14:35.153 ************************************ 00:14:35.153 02:20:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:35.153 02:20:25 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:35.153 02:20:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:35.153 02:20:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:35.153 02:20:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:35.153 ************************************ 00:14:35.153 START TEST nvmf_target_discovery 00:14:35.153 ************************************ 00:14:35.153 02:20:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:35.447 * Looking for test storage... 
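[editor's note] The `killprocess 1761718` sequence traced a few lines above (the `kill -0` probe, the `uname`/`ps --no-headers -o comm=` check, then `kill` and `wait`) can be sketched as below. This is a simplified reconstruction from the xtrace output, not the real helper in `autotest_common.sh`, which does additional sudo and reactor-name handling:

```shell
#!/usr/bin/env bash
# Minimal sketch of the killprocess flow seen in the trace above.
killprocess() {
    local pid=$1
    # refuse an empty pid, mirroring the "'[' -z ... ']'" guard in the log
    [ -n "$pid" ] || return 1
    # kill -0 only probes whether the process still exists
    kill -0 "$pid" 2>/dev/null || return 0
    echo "killing process with pid $pid"
    kill "$pid"
    # reap the process so the test leaves no zombie behind
    wait "$pid" 2>/dev/null || true
}

# stand-in for the nvmf_tgt reactor process (hypothetical target)
sleep 30 &
target=$!
killprocess "$target"
```

After `killprocess` returns, a further `kill -0` on the same pid fails, which is exactly what the harness relies on before clearing `nvmfpid=`.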
00:14:35.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:35.447 02:20:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:35.447 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:14:35.447 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.447 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.447 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.447 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.447 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.447 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.447 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.447 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.447 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.447 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.447 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:35.447 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:14:35.447 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.447 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.447 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:14:35.447 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.447 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:35.447 02:20:25 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.447 02:20:25 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.447 02:20:25 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.447 02:20:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.447 02:20:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.447 02:20:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.447 02:20:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:14:35.448 02:20:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.448 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:14:35.448 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:35.448 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:35.448 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.448 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.448 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.448 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:35.448 02:20:25 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:35.448 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:35.448 02:20:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:14:35.448 02:20:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:14:35.448 02:20:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:14:35.448 02:20:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:14:35.448 02:20:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:14:35.448 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:35.448 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.448 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:35.448 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:35.448 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:35.448 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.448 02:20:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.448 02:20:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.448 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:35.448 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:35.448 02:20:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:14:35.448 02:20:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:14:36.824 Found 0000:08:00.0 (0x8086 - 0x159b) 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.824 
02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:14:36.824 Found 0000:08:00.1 (0x8086 - 0x159b) 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:14:36.824 Found net devices under 0000:08:00.0: cvl_0_0 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.824 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:36.825 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.825 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:36.825 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:36.825 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.825 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:14:36.825 Found net devices under 0000:08:00.1: cvl_0_1 00:14:36.825 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.825 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:36.825 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:14:36.825 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:36.825 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:36.825 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:36.825 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 
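[editor's note] The "Found net devices under 0000:08:00.x" lines above come from a loop in `nvmf/common.sh` that globs each NIC's `net/` subdirectory in sysfs and strips the path down to the interface name. A runnable sketch against a throwaway fake sysfs tree (the real harness reads `/sys/bus/pci/devices`; the temp-dir layout here is an assumption for portability):

```shell
#!/usr/bin/env bash
# Sketch of the PCI -> net-device discovery loop from nvmf/common.sh,
# run against a fake sysfs tree so it works without real E810 NICs.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:08:00.0/net/cvl_0_0" "$sysfs/0000:08:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:08:00.0 0000:08:00.1; do
    # glob the net/ subdirectory of this NIC's PCI node
    pci_net_devs=("$sysfs/$pci/net/"*)
    # keep only the interface name, as "${pci_net_devs[@]##*/}" does in the log
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```

With two matching devices collected, `(( 2 == 0 ))` fails and the harness concludes `is_hw=yes`, as the trace shows.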
00:14:36.825 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:36.825 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:36.825 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:36.825 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:36.825 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:36.825 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:36.825 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:36.825 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:36.825 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:36.825 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:36.825 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:36.825 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:37.083 02:20:27 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:37.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:37.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:14:37.083 00:14:37.083 --- 10.0.0.2 ping statistics --- 00:14:37.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.083 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:37.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:37.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:14:37.083 00:14:37.083 --- 10.0.0.1 ping statistics --- 00:14:37.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.083 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1764535 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1764535 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1764535 ']' 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:37.083 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.083 [2024-07-11 02:20:27.407833] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:14:37.083 [2024-07-11 02:20:27.407931] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.083 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.083 [2024-07-11 02:20:27.472311] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:37.341 [2024-07-11 02:20:27.559709] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:37.341 [2024-07-11 02:20:27.559775] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:37.341 [2024-07-11 02:20:27.559792] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:37.341 [2024-07-11 02:20:27.559806] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:37.341 [2024-07-11 02:20:27.559818] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:37.341 [2024-07-11 02:20:27.559907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.341 [2024-07-11 02:20:27.559962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:37.342 [2024-07-11 02:20:27.560026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:37.342 [2024-07-11 02:20:27.560028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.342 [2024-07-11 02:20:27.705323] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:14:37.342 02:20:27 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.342 Null1 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.342 [2024-07-11 02:20:27.745606] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:37.342 02:20:27 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.342 Null2 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.342 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.600 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.600 02:20:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:14:37.600 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.601 Null3 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.601 Null4 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.601 02:20:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 4420 00:14:37.601 00:14:37.601 Discovery Log Number of Records 6, Generation counter 6 00:14:37.601 =====Discovery Log Entry 0====== 00:14:37.601 trtype: tcp 00:14:37.601 adrfam: ipv4 00:14:37.601 subtype: current discovery subsystem 00:14:37.601 treq: not required 00:14:37.601 portid: 0 00:14:37.601 trsvcid: 4420 00:14:37.601 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:37.601 traddr: 10.0.0.2 00:14:37.601 eflags: explicit discovery connections, duplicate discovery information 00:14:37.601 sectype: none 00:14:37.601 =====Discovery Log Entry 1====== 00:14:37.601 trtype: tcp 00:14:37.601 adrfam: ipv4 00:14:37.601 subtype: nvme subsystem 00:14:37.601 treq: not required 00:14:37.601 portid: 0 00:14:37.601 trsvcid: 4420 00:14:37.601 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:37.601 traddr: 10.0.0.2 00:14:37.601 eflags: none 00:14:37.601 sectype: none 00:14:37.601 =====Discovery Log Entry 2====== 00:14:37.601 trtype: tcp 00:14:37.601 adrfam: ipv4 00:14:37.601 subtype: nvme subsystem 00:14:37.601 treq: not required 00:14:37.601 portid: 
0 00:14:37.601 trsvcid: 4420 00:14:37.601 subnqn: nqn.2016-06.io.spdk:cnode2 00:14:37.601 traddr: 10.0.0.2 00:14:37.601 eflags: none 00:14:37.601 sectype: none 00:14:37.601 =====Discovery Log Entry 3====== 00:14:37.601 trtype: tcp 00:14:37.601 adrfam: ipv4 00:14:37.601 subtype: nvme subsystem 00:14:37.601 treq: not required 00:14:37.601 portid: 0 00:14:37.601 trsvcid: 4420 00:14:37.601 subnqn: nqn.2016-06.io.spdk:cnode3 00:14:37.601 traddr: 10.0.0.2 00:14:37.601 eflags: none 00:14:37.601 sectype: none 00:14:37.601 =====Discovery Log Entry 4====== 00:14:37.601 trtype: tcp 00:14:37.601 adrfam: ipv4 00:14:37.601 subtype: nvme subsystem 00:14:37.601 treq: not required 00:14:37.601 portid: 0 00:14:37.601 trsvcid: 4420 00:14:37.601 subnqn: nqn.2016-06.io.spdk:cnode4 00:14:37.602 traddr: 10.0.0.2 00:14:37.602 eflags: none 00:14:37.602 sectype: none 00:14:37.602 =====Discovery Log Entry 5====== 00:14:37.602 trtype: tcp 00:14:37.602 adrfam: ipv4 00:14:37.602 subtype: discovery subsystem referral 00:14:37.602 treq: not required 00:14:37.602 portid: 0 00:14:37.602 trsvcid: 4430 00:14:37.602 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:37.602 traddr: 10.0.0.2 00:14:37.602 eflags: none 00:14:37.602 sectype: none 00:14:37.602 02:20:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:14:37.602 Perform nvmf subsystem discovery via RPC 00:14:37.602 02:20:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:14:37.602 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.602 02:20:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.602 [ 00:14:37.602 { 00:14:37.602 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:37.602 "subtype": "Discovery", 00:14:37.602 "listen_addresses": [ 00:14:37.602 { 00:14:37.602 "trtype": "TCP", 00:14:37.602 "adrfam": "IPv4", 00:14:37.602 "traddr": "10.0.0.2", 
00:14:37.602 "trsvcid": "4420" 00:14:37.602 } 00:14:37.602 ], 00:14:37.602 "allow_any_host": true, 00:14:37.602 "hosts": [] 00:14:37.602 }, 00:14:37.602 { 00:14:37.602 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:37.602 "subtype": "NVMe", 00:14:37.602 "listen_addresses": [ 00:14:37.602 { 00:14:37.602 "trtype": "TCP", 00:14:37.602 "adrfam": "IPv4", 00:14:37.602 "traddr": "10.0.0.2", 00:14:37.602 "trsvcid": "4420" 00:14:37.602 } 00:14:37.602 ], 00:14:37.602 "allow_any_host": true, 00:14:37.602 "hosts": [], 00:14:37.602 "serial_number": "SPDK00000000000001", 00:14:37.602 "model_number": "SPDK bdev Controller", 00:14:37.602 "max_namespaces": 32, 00:14:37.602 "min_cntlid": 1, 00:14:37.602 "max_cntlid": 65519, 00:14:37.602 "namespaces": [ 00:14:37.602 { 00:14:37.602 "nsid": 1, 00:14:37.602 "bdev_name": "Null1", 00:14:37.602 "name": "Null1", 00:14:37.602 "nguid": "C59FE093385B42F8955F3872FE07C5D7", 00:14:37.602 "uuid": "c59fe093-385b-42f8-955f-3872fe07c5d7" 00:14:37.602 } 00:14:37.602 ] 00:14:37.602 }, 00:14:37.602 { 00:14:37.602 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:37.602 "subtype": "NVMe", 00:14:37.602 "listen_addresses": [ 00:14:37.602 { 00:14:37.602 "trtype": "TCP", 00:14:37.602 "adrfam": "IPv4", 00:14:37.602 "traddr": "10.0.0.2", 00:14:37.602 "trsvcid": "4420" 00:14:37.602 } 00:14:37.602 ], 00:14:37.602 "allow_any_host": true, 00:14:37.602 "hosts": [], 00:14:37.602 "serial_number": "SPDK00000000000002", 00:14:37.602 "model_number": "SPDK bdev Controller", 00:14:37.602 "max_namespaces": 32, 00:14:37.602 "min_cntlid": 1, 00:14:37.602 "max_cntlid": 65519, 00:14:37.602 "namespaces": [ 00:14:37.602 { 00:14:37.602 "nsid": 1, 00:14:37.602 "bdev_name": "Null2", 00:14:37.602 "name": "Null2", 00:14:37.602 "nguid": "C5F0F277F3984C40A016DCA53ACE4505", 00:14:37.602 "uuid": "c5f0f277-f398-4c40-a016-dca53ace4505" 00:14:37.602 } 00:14:37.602 ] 00:14:37.602 }, 00:14:37.602 { 00:14:37.602 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:14:37.602 "subtype": "NVMe", 00:14:37.602 
"listen_addresses": [ 00:14:37.602 { 00:14:37.602 "trtype": "TCP", 00:14:37.602 "adrfam": "IPv4", 00:14:37.602 "traddr": "10.0.0.2", 00:14:37.602 "trsvcid": "4420" 00:14:37.602 } 00:14:37.602 ], 00:14:37.602 "allow_any_host": true, 00:14:37.602 "hosts": [], 00:14:37.602 "serial_number": "SPDK00000000000003", 00:14:37.602 "model_number": "SPDK bdev Controller", 00:14:37.602 "max_namespaces": 32, 00:14:37.602 "min_cntlid": 1, 00:14:37.602 "max_cntlid": 65519, 00:14:37.602 "namespaces": [ 00:14:37.602 { 00:14:37.602 "nsid": 1, 00:14:37.602 "bdev_name": "Null3", 00:14:37.602 "name": "Null3", 00:14:37.602 "nguid": "2C9689445B734C7EADB336A6CFC8837B", 00:14:37.602 "uuid": "2c968944-5b73-4c7e-adb3-36a6cfc8837b" 00:14:37.602 } 00:14:37.602 ] 00:14:37.602 }, 00:14:37.602 { 00:14:37.602 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:14:37.602 "subtype": "NVMe", 00:14:37.602 "listen_addresses": [ 00:14:37.602 { 00:14:37.602 "trtype": "TCP", 00:14:37.602 "adrfam": "IPv4", 00:14:37.602 "traddr": "10.0.0.2", 00:14:37.602 "trsvcid": "4420" 00:14:37.602 } 00:14:37.602 ], 00:14:37.602 "allow_any_host": true, 00:14:37.602 "hosts": [], 00:14:37.602 "serial_number": "SPDK00000000000004", 00:14:37.602 "model_number": "SPDK bdev Controller", 00:14:37.602 "max_namespaces": 32, 00:14:37.602 "min_cntlid": 1, 00:14:37.602 "max_cntlid": 65519, 00:14:37.602 "namespaces": [ 00:14:37.602 { 00:14:37.602 "nsid": 1, 00:14:37.602 "bdev_name": "Null4", 00:14:37.602 "name": "Null4", 00:14:37.602 "nguid": "C39FD3ED2BF04D34810566320D6373E1", 00:14:37.602 "uuid": "c39fd3ed-2bf0-4d34-8105-66320d6373e1" 00:14:37.602 } 00:14:37.602 ] 00:14:37.602 } 00:14:37.602 ] 00:14:37.602 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.602 02:20:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:14:37.602 02:20:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:37.602 02:20:28 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:37.602 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.602 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.602 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.603 02:20:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:14:37.603 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.603 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:37.861 rmmod nvme_tcp 00:14:37.861 rmmod nvme_fabrics 00:14:37.861 rmmod nvme_keyring 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:14:37.861 
02:20:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1764535 ']' 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1764535 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1764535 ']' 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1764535 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1764535 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1764535' 00:14:37.861 killing process with pid 1764535 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1764535 00:14:37.861 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1764535 00:14:38.120 02:20:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:38.120 02:20:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:38.120 02:20:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:38.120 02:20:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:38.120 02:20:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:38.120 02:20:28 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.120 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.120 02:20:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.029 02:20:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:40.029 00:14:40.029 real 0m4.843s 00:14:40.029 user 0m3.945s 00:14:40.029 sys 0m1.490s 00:14:40.029 02:20:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:40.029 02:20:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:40.029 ************************************ 00:14:40.029 END TEST nvmf_target_discovery 00:14:40.029 ************************************ 00:14:40.029 02:20:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:40.029 02:20:30 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:40.029 02:20:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:40.029 02:20:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:40.029 02:20:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:40.288 ************************************ 00:14:40.288 START TEST nvmf_referrals 00:14:40.288 ************************************ 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:40.288 * Looking for test storage... 
00:14:40.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:40.288 
02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:14:40.288 02:20:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:14:42.193 Found 0000:08:00.0 (0x8086 - 0x159b) 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:14:42.193 Found 0000:08:00.1 (0x8086 - 0x159b) 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:14:42.193 Found net devices under 0000:08:00.0: cvl_0_0 00:14:42.193 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:42.193 02:20:32 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:14:42.194 Found net devices under 0000:08:00.1: cvl_0_1 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:42.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:42.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:14:42.194 00:14:42.194 --- 10.0.0.2 ping statistics --- 00:14:42.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.194 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:42.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:42.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:14:42.194 00:14:42.194 --- 10.0.0.1 ping statistics --- 00:14:42.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.194 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.194 02:20:32 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1766153 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1766153 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1766153 ']' 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:42.194 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.194 [2024-07-11 02:20:32.352581] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:14:42.194 [2024-07-11 02:20:32.352672] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.194 EAL: No free 2048 kB hugepages reported on node 1 00:14:42.194 [2024-07-11 02:20:32.416389] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:42.194 [2024-07-11 02:20:32.503774] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.194 [2024-07-11 02:20:32.503833] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:42.194 [2024-07-11 02:20:32.503850] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:42.194 [2024-07-11 02:20:32.503865] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:42.194 [2024-07-11 02:20:32.503878] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:42.194 [2024-07-11 02:20:32.503955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.194 [2024-07-11 02:20:32.504040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.194 [2024-07-11 02:20:32.504121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:42.194 [2024-07-11 02:20:32.504125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.452 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:42.452 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:14:42.452 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:42.452 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:42.452 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.452 02:20:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.453 [2024-07-11 02:20:32.647269] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.453 02:20:32 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.453 [2024-07-11 02:20:32.659462] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@48 -- # jq length 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:42.453 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:42.711 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:42.711 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:42.711 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:14:42.711 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.711 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.711 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.711 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:14:42.711 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.711 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.711 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.711 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:14:42.711 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.712 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.712 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.712 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:42.712 02:20:32 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:14:42.712 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.712 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.712 02:20:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.712 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:42.712 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:42.712 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:42.712 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:42.712 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:42.712 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:42.712 02:20:32 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:42.712 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:42.712 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:42.712 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:42.712 02:20:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.712 02:20:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.712 02:20:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.712 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n 
nqn.2016-06.io.spdk:cnode1 00:14:42.712 02:20:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.712 02:20:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.712 02:20:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.712 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:42.712 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:42.712 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:42.712 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:42.712 02:20:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.712 02:20:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:42.712 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:42.712 02:20:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.970 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:42.970 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:42.970 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:14:42.970 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:42.970 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:42.970 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:42.970 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:14:42.970 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:42.970 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:42.970 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:42.970 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:42.970 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:42.970 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:42.970 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:42.970 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:43.228 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:43.228 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:43.228 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:43.228 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:43.228 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:43.228 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
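The referral checks above repeatedly run the same jq filter, `.records[] | select(.subtype != "current discovery subsystem").traddr`, over the JSON that `nvme discover -o json` prints, then `sort` the result before comparing it to the expected address list. A minimal Python re-implementation of that filter, using made-up sample records whose field names match the log:

```python
# Illustrative re-implementation of the jq filter used throughout this log:
#   .records[] | select(.subtype != "current discovery subsystem").traddr
# The sample discovery-log records below are invented for this sketch; only
# the field names ("records", "subtype", "traddr") come from the log output.
import json

def referral_traddrs(discover_json: str) -> list[str]:
    """Return sorted traddrs of all entries except the current discovery subsystem."""
    records = json.loads(discover_json).get("records", [])
    return sorted(
        r["traddr"]
        for r in records
        if r.get("subtype") != "current discovery subsystem"
    )

sample = json.dumps({
    "records": [
        {"subtype": "current discovery subsystem", "traddr": "10.0.0.2"},
        {"subtype": "nvme subsystem", "traddr": "127.0.0.2"},
        {"subtype": "discovery subsystem referral", "traddr": "127.0.0.2"},
    ]
})
print(referral_traddrs(sample))  # -> ['127.0.0.2', '127.0.0.2']
```

This mirrors the `get_referral_ips nvme` path in referrals.sh@26, where the sorted traddr list is compared against the addresses registered via `nvmf_discovery_add_referral`.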
00:14:43.485 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:43.486 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:43.744 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:43.744 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:43.744 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:43.744 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:43.744 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:14:43.744 02:20:33 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:43.744 02:20:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:43.744 02:20:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:43.744 02:20:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.744 02:20:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:43.744 02:20:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.744 02:20:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:43.744 02:20:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:14:43.744 02:20:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.744 02:20:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:43.744 02:20:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.744 02:20:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:43.744 02:20:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:43.744 02:20:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:43.744 02:20:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:43.744 02:20:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:43.744 02:20:34 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:43.744 02:20:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:44.003 02:20:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:44.003 02:20:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:44.003 02:20:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:44.003 02:20:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:14:44.003 02:20:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:44.003 02:20:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:14:44.003 02:20:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:44.003 02:20:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:14:44.003 02:20:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:44.003 02:20:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:44.003 rmmod nvme_tcp 00:14:44.003 rmmod nvme_fabrics 00:14:44.003 rmmod nvme_keyring 00:14:44.003 02:20:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:44.003 02:20:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:14:44.003 02:20:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:14:44.003 02:20:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1766153 ']' 00:14:44.003 02:20:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1766153 00:14:44.003 02:20:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1766153 ']' 00:14:44.003 02:20:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1766153 00:14:44.003 02:20:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:14:44.003 02:20:34 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:44.003 02:20:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1766153 00:14:44.003 02:20:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:44.003 02:20:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:44.003 02:20:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1766153' 00:14:44.003 killing process with pid 1766153 00:14:44.003 02:20:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1766153 00:14:44.003 02:20:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1766153 00:14:44.264 02:20:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:44.264 02:20:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:44.264 02:20:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:44.264 02:20:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:44.264 02:20:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:44.264 02:20:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.264 02:20:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:44.264 02:20:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.173 02:20:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:46.173 00:14:46.173 real 0m6.077s 00:14:46.173 user 0m9.258s 00:14:46.173 sys 0m1.860s 00:14:46.173 02:20:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:46.173 02:20:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:46.173 ************************************ 
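The `killprocess` sequence above (autotest_common.sh@952-972) does not blindly send SIGTERM: it first confirms the OS is Linux, reads the PID's command name with `ps --no-headers -o comm=` (here `reactor_0`, the SPDK target's reactor thread), and branches on whether that name is `sudo` before killing and waiting. A hedged sketch of the comm-name lookup half of that check; the function names are ours, only the `reactor_0`/`sudo` values come from the log:

```python
# Sketch of the PID-identity check killprocess performs before sending a
# signal: read the process's short command name (comm) and compare it to
# what we expect, so a PID that has exited or been reused is not killed.
# Uses /proc directly instead of `ps --no-headers -o comm=` (Linux only).

def comm_of(pid: int) -> str:
    # /proc/<pid>/comm holds the kernel's truncated command name
    with open(f"/proc/{pid}/comm") as f:
        return f.read().strip()

def safe_to_kill(pid: int, expected_comm: str) -> bool:
    try:
        return comm_of(pid) == expected_comm
    except OSError:
        return False  # process already gone (or /proc unavailable)
```

In the log the expected name is `reactor_0`; the `'[' reactor_0 = sudo ']'` test exists because a target started under sudo must be signalled differently.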
00:14:46.173 END TEST nvmf_referrals 00:14:46.173 ************************************ 00:14:46.173 02:20:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:46.173 02:20:36 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:46.173 02:20:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:46.173 02:20:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:46.173 02:20:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:46.173 ************************************ 00:14:46.173 START TEST nvmf_connect_disconnect 00:14:46.173 ************************************ 00:14:46.173 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:46.432 * Looking for test storage... 00:14:46.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:46.432 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:46.432 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:14:46.432 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.432 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.432 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.432 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.432 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.432 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.432 02:20:36 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.432 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.432 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.432 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.432 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:46.432 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:14:46.432 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.432 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.432 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:46.432 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:46.432 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:46.432 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
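The long `PATH` values echoed by paths/export.sh above show the same `/opt/go`, `/opt/protoc`, and `/opt/golangci` directories prepended many times, because each `source` of export.sh prepends them again without checking. A common order-preserving deduplication, shown here purely as an illustration (it is not part of the SPDK scripts):

```python
# Order-preserving PATH dedup: keep the first occurrence of each directory,
# drop later repeats and empty components. Illustrative only; the repeated
# prefixes in the log come from re-sourcing paths/export.sh.
def dedup_path(path: str) -> str:
    seen: set[str] = set()
    kept: list[str] = []
    for p in path.split(":"):
        if p and p not in seen:
            seen.add(p)
            kept.append(p)
    return ":".join(kept)

print(dedup_path("/opt/go/bin:/usr/bin:/opt/go/bin:/sbin"))
# -> /opt/go/bin:/usr/bin:/sbin
```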
EXIT 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:14:46.433 02:20:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 
00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:14:48.341 Found 0000:08:00.0 (0x8086 - 0x159b) 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:14:48.341 Found 0000:08:00.1 (0x8086 - 0x159b) 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:48.341 02:20:38 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:14:48.341 Found net devices under 0000:08:00.0: cvl_0_0 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:14:48.341 Found net devices under 0000:08:00.1: cvl_0_1 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:48.341 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:48.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:48.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:14:48.342 00:14:48.342 --- 10.0.0.2 ping statistics --- 00:14:48.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.342 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:48.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:48.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:14:48.342 00:14:48.342 --- 10.0.0.1 ping statistics --- 00:14:48.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.342 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1767856 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1767856 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1767856 ']' 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:48.342 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:48.342 [2024-07-11 02:20:38.544059] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:14:48.342 [2024-07-11 02:20:38.544148] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.342 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.342 [2024-07-11 02:20:38.608172] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:48.342 [2024-07-11 02:20:38.695682] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.342 [2024-07-11 02:20:38.695739] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:48.342 [2024-07-11 02:20:38.695756] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.342 [2024-07-11 02:20:38.695771] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.342 [2024-07-11 02:20:38.695784] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
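`nvmf_tgt` is launched inside the namespace and the harness blocks in `waitforlisten` until the RPC socket at /var/tmp/spdk.sock appears. A minimal sketch of that polling pattern (the retry count and interval are illustrative; the real helper in autotest_common.sh also checks that the PID is still alive):

```shell
# Poll until a UNIX-domain socket exists, roughly what waitforlisten does.
waitfor_sock() {
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [ -S "$sock" ] && return 0  # -S: path exists and is a socket
    sleep 0.1
  done
  return 1                      # gave up: target never started listening
}
```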
00:14:48.342 [2024-07-11 02:20:38.698532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.342 [2024-07-11 02:20:38.698621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.342 [2024-07-11 02:20:38.698783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:48.342 [2024-07-11 02:20:38.698818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:48.601 [2024-07-11 02:20:38.835252] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:48.601 [2024-07-11 02:20:38.884001] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:14:48.601 02:20:38 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:14:48.601 02:20:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:51.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.034 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:15:44.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.720 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.347 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:55.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:00.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:02.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:04.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:07.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:09.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:14.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:16.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:23.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:25.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:29.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:31.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:34.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:36.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:38.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:41.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:43.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:45.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:48.356 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:50.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:52.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:55.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:57.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:59.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:01.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:04.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:06.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:08.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:11.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:13.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:15.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:18.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:20.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:22.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:25.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:27.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:29.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:31.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:33.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:36.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:38.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:38.378 02:24:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:18:38.378 02:24:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:18:38.378 02:24:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 
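With `num_iterations=100` and `NVME_CONNECT='nvme connect -i 8'` (set at connect_disconnect.sh @27–@29), the body of the test reduces to the loop below, which produces the 100 "disconnected 1 controller(s)" lines above. Dry-run sketch via `run="echo"`; the transport address and port are those of the listener added earlier:

```shell
run="echo"
nqn=nqn.2016-06.io.spdk:cnode1
for ((i = 0; i < 100; i++)); do
  $run nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n "$nqn"
  $run nvme disconnect -n "$nqn"  # real run prints "NQN:... disconnected 1 controller(s)"
done
```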
00:18:38.378 02:24:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:18:38.378 02:24:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:38.378 02:24:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:18:38.378 02:24:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:38.378 02:24:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:38.378 rmmod nvme_tcp 00:18:38.637 rmmod nvme_fabrics 00:18:38.637 rmmod nvme_keyring 00:18:38.637 02:24:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:38.637 02:24:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:18:38.637 02:24:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:18:38.637 02:24:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1767856 ']' 00:18:38.637 02:24:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1767856 00:18:38.637 02:24:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1767856 ']' 00:18:38.637 02:24:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1767856 00:18:38.637 02:24:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:18:38.637 02:24:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:38.637 02:24:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1767856 00:18:38.637 02:24:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:38.637 02:24:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:38.637 02:24:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1767856' 
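Teardown signals the target by PID and reaps it (the `killprocess` helper traced above). A minimal stand-in, with `sleep` substituting for the nvmf_tgt process:

```shell
sleep 30 & pid=$!                # stand-in for the nvmf_tgt process
kill "$pid"                      # killprocess: send SIGTERM to $pid
wait "$pid" 2>/dev/null || true  # reap it; exit status reflects the signal
```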
00:18:38.637 killing process with pid 1767856 00:18:38.637 02:24:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1767856 00:18:38.637 02:24:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1767856 00:18:38.637 02:24:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:38.637 02:24:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:38.637 02:24:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:38.637 02:24:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:38.637 02:24:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:38.637 02:24:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.637 02:24:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:38.637 02:24:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.178 02:24:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:41.178 00:18:41.178 real 3m54.501s 00:18:41.178 user 14m53.664s 00:18:41.178 sys 0m32.869s 00:18:41.178 02:24:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:41.178 02:24:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:41.178 ************************************ 00:18:41.178 END TEST nvmf_connect_disconnect 00:18:41.178 ************************************ 00:18:41.178 02:24:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:41.178 02:24:31 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:18:41.178 02:24:31 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:41.178 02:24:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:41.178 02:24:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:41.178 ************************************ 00:18:41.178 START TEST nvmf_multitarget 00:18:41.178 ************************************ 00:18:41.178 02:24:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:18:41.178 * Looking for test storage... 00:18:41.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:41.178 02:24:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:41.179 02:24:31 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:18:41.179 02:24:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:42.589 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:42.589 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.589 02:24:32 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:42.589 Found net devices under 0000:08:00.0: cvl_0_0 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:42.589 Found net devices under 0000:08:00.1: cvl_0_1 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:42.589 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:42.590 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:18:42.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:18:42.590 00:18:42.590 --- 10.0.0.2 ping statistics --- 00:18:42.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.590 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:42.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:42.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:18:42.590 00:18:42.590 --- 10.0.0.1 ping statistics --- 00:18:42.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.590 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- 
common/autotest_common.sh@10 -- # set +x 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1794225 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1794225 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1794225 ']' 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:42.590 02:24:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:42.590 [2024-07-11 02:24:32.955651] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:18:42.590 [2024-07-11 02:24:32.955758] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.590 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.848 [2024-07-11 02:24:33.021224] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:42.848 [2024-07-11 02:24:33.113013] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
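For readers following the trace, the `nvmf_tcp_init` sequence above (nvmf/common.sh@229 through @268) can be condensed into a dry-run sketch. The interface names `cvl_0_0`/`cvl_0_1`, namespace name, addresses, and port come straight from the log; the `run` wrapper is a hypothetical echo-only stand-in added here so the command sequence can be inspected without root privileges or the E810 hardware:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init steps seen in the trace above.
# "run" only prints each command; to actually build the namespace-backed
# TCP test topology, execute the same commands directly as root.
run() { printf '%s\n' "$*"; }

target_if=cvl_0_0       # moved into the target namespace (gets 10.0.0.2)
initiator_if=cvl_0_1    # stays in the root namespace     (gets 10.0.0.1)
ns=cvl_0_0_ns_spdk      # NVMF_TARGET_NAMESPACE in the log

# Flush any stale addresses, then isolate the target interface.
run ip -4 addr flush "$target_if"
run ip -4 addr flush "$initiator_if"
run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"

# Address both ends of the link, one per namespace.
run ip addr add 10.0.0.1/24 dev "$initiator_if"
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

# Bring links up (including loopback inside the namespace).
run ip link set "$initiator_if" up
run ip netns exec "$ns" ip link set "$target_if" up
run ip netns exec "$ns" ip link set lo up

# Open the NVMe/TCP port on the initiator side, then verify reachability.
run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$ns" ping -c 1 10.0.0.1
```

After this, the log prepends `ip netns exec cvl_0_0_ns_spdk` to `NVMF_APP`, which is why `nvmf_tgt` later launches inside the namespace and listens on 10.0.0.2.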
00:18:42.848 [2024-07-11 02:24:33.113075] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.848 [2024-07-11 02:24:33.113091] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.848 [2024-07-11 02:24:33.113104] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.848 [2024-07-11 02:24:33.113116] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:42.848 [2024-07-11 02:24:33.113201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.848 [2024-07-11 02:24:33.113253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.848 [2024-07-11 02:24:33.113304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:42.848 [2024-07-11 02:24:33.113307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.849 02:24:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:42.849 02:24:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:18:42.849 02:24:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:42.849 02:24:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:42.849 02:24:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:42.849 02:24:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.849 02:24:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:42.849 02:24:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:42.849 02:24:33 
nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:18:43.107 02:24:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:18:43.107 02:24:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:18:43.107 "nvmf_tgt_1" 00:18:43.107 02:24:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:18:43.363 "nvmf_tgt_2" 00:18:43.363 02:24:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:43.363 02:24:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:18:43.363 02:24:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:18:43.363 02:24:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:18:43.620 true 00:18:43.621 02:24:33 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:18:43.621 true 00:18:43.621 02:24:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:43.621 02:24:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:18:43.879 02:24:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:18:43.879 02:24:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:18:43.879 02:24:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- 
# nvmftestfini 00:18:43.879 02:24:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:43.879 02:24:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:18:43.879 02:24:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:43.879 02:24:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:18:43.879 02:24:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:43.879 02:24:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:43.879 rmmod nvme_tcp 00:18:43.879 rmmod nvme_fabrics 00:18:43.879 rmmod nvme_keyring 00:18:43.879 02:24:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:43.879 02:24:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:18:43.879 02:24:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:18:43.879 02:24:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1794225 ']' 00:18:43.879 02:24:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1794225 00:18:43.879 02:24:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1794225 ']' 00:18:43.879 02:24:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1794225 00:18:43.879 02:24:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:18:43.879 02:24:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:43.879 02:24:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1794225 00:18:43.879 02:24:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:43.879 02:24:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:43.879 02:24:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1794225' 00:18:43.879 killing process 
with pid 1794225 00:18:43.879 02:24:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1794225 00:18:43.879 02:24:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1794225 00:18:44.139 02:24:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:44.139 02:24:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:44.139 02:24:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:44.139 02:24:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:44.139 02:24:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:44.139 02:24:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.139 02:24:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.139 02:24:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.045 02:24:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:46.045 00:18:46.045 real 0m5.322s 00:18:46.045 user 0m6.594s 00:18:46.045 sys 0m1.672s 00:18:46.045 02:24:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:46.045 02:24:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:46.303 ************************************ 00:18:46.303 END TEST nvmf_multitarget 00:18:46.303 ************************************ 00:18:46.303 02:24:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:46.303 02:24:36 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:46.303 02:24:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:46.303 02:24:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:46.303 
02:24:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:46.303 ************************************ 00:18:46.303 START TEST nvmf_rpc 00:18:46.303 ************************************ 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:46.303 * Looking for test storage... 00:18:46.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:46.303 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:18:46.304 02:24:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:18:46.304 02:24:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:18:46.304 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:46.304 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.304 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:46.304 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:46.304 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:46.304 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.304 02:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.304 02:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.304 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:46.304 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:46.304 02:24:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:18:46.304 02:24:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:18:48.202 02:24:38 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 
== mlx5 ]] 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:48.202 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:48.202 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:48.202 Found net devices under 0000:08:00.0: cvl_0_0 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:48.202 Found net devices under 0000:08:00.1: cvl_0_1 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 
00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:48.202 02:24:38 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:48.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:48.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:18:48.202 00:18:48.202 --- 10.0.0.2 ping statistics --- 00:18:48.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.202 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:48.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:48.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:18:48.202 00:18:48.202 --- 10.0.0.1 ping statistics --- 00:18:48.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.202 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:18:48.202 
02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1795848 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1795848 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1795848 ']' 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:48.202 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:48.203 [2024-07-11 02:24:38.353385] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
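The `waitforlisten 1795848` call above blocks until the freshly launched `nvmf_tgt` is ready to accept RPCs on `/var/tmp/spdk.sock`. A minimal sketch of that polling pattern follows; the socket path and the "Waiting for process..." message are taken from the log, while the retry count and the plain `-S` socket test are simplifying assumptions (the real helper in `autotest_common.sh` probes the socket through `rpc.py`):

```shell
#!/usr/bin/env bash
# Hedged sketch of waitforlisten: poll until the target process has created
# its RPC UNIX socket, or fail if the process dies / retries are exhausted.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i=0
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( i++ < max_retries )); do
        # Bail out early if the target died before creating the socket
        kill -0 "$pid" 2>/dev/null || return 1
        # Simplified readiness check: the real helper issues an RPC instead
        [[ -S $rpc_addr ]] && return 0
        sleep 0.1
    done
    return 1
}
```

In the log this runs immediately after `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF` is launched in the background.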
00:18:48.203 [2024-07-11 02:24:38.353475] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:48.203 EAL: No free 2048 kB hugepages reported on node 1
00:18:48.203 [2024-07-11 02:24:38.417336] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:18:48.203 [2024-07-11 02:24:38.505054] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:48.203 [2024-07-11 02:24:38.505117] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:48.203 [2024-07-11 02:24:38.505133] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:48.203 [2024-07-11 02:24:38.505147] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:48.203 [2024-07-11 02:24:38.505160] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
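The `-m 0xF` core mask passed to `nvmf_tgt` is what produces the four "Reactor started on core 0..3" notices that follow: each set bit in the mask selects one core for a reactor thread. A small bash-only sketch of decoding such a mask (no SPDK involved; the 64-bit width is an assumption for illustration):

```shell
#!/usr/bin/env bash
# Decode an SPDK-style hex core mask into the list of cores it selects.
mask=0xF            # value passed via `nvmf_tgt ... -m 0xF` in the log above
cores=()
for ((bit = 0; bit < 64; bit++)); do
    # Keep this core if its bit is set in the mask
    if (( (mask >> bit) & 1 )); then
        cores+=("$bit")
    fi
done
echo "reactor cores: ${cores[*]}"   # -> reactor cores: 0 1 2 3
```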
00:18:48.203 [2024-07-11 02:24:38.505243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.203 [2024-07-11 02:24:38.505295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.203 [2024-07-11 02:24:38.505345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:48.203 [2024-07-11 02:24:38.505348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.203 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:48.203 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:18:48.203 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:48.203 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:48.203 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:18:48.460 "tick_rate": 2700000000, 00:18:48.460 "poll_groups": [ 00:18:48.460 { 00:18:48.460 "name": "nvmf_tgt_poll_group_000", 00:18:48.460 "admin_qpairs": 0, 00:18:48.460 "io_qpairs": 0, 00:18:48.460 "current_admin_qpairs": 0, 00:18:48.460 "current_io_qpairs": 0, 00:18:48.460 "pending_bdev_io": 0, 00:18:48.460 "completed_nvme_io": 0, 00:18:48.460 "transports": [] 00:18:48.460 }, 00:18:48.460 { 00:18:48.460 "name": "nvmf_tgt_poll_group_001", 00:18:48.460 "admin_qpairs": 0, 00:18:48.460 "io_qpairs": 0, 00:18:48.460 "current_admin_qpairs": 
0, 00:18:48.460 "current_io_qpairs": 0, 00:18:48.460 "pending_bdev_io": 0, 00:18:48.460 "completed_nvme_io": 0, 00:18:48.460 "transports": [] 00:18:48.460 }, 00:18:48.460 { 00:18:48.460 "name": "nvmf_tgt_poll_group_002", 00:18:48.460 "admin_qpairs": 0, 00:18:48.460 "io_qpairs": 0, 00:18:48.460 "current_admin_qpairs": 0, 00:18:48.460 "current_io_qpairs": 0, 00:18:48.460 "pending_bdev_io": 0, 00:18:48.460 "completed_nvme_io": 0, 00:18:48.460 "transports": [] 00:18:48.460 }, 00:18:48.460 { 00:18:48.460 "name": "nvmf_tgt_poll_group_003", 00:18:48.460 "admin_qpairs": 0, 00:18:48.460 "io_qpairs": 0, 00:18:48.460 "current_admin_qpairs": 0, 00:18:48.460 "current_io_qpairs": 0, 00:18:48.460 "pending_bdev_io": 0, 00:18:48.460 "completed_nvme_io": 0, 00:18:48.460 "transports": [] 00:18:48.460 } 00:18:48.460 ] 00:18:48.460 }' 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:48.460 [2024-07-11 02:24:38.731408] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # 
rpc_cmd nvmf_get_stats 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:18:48.460 "tick_rate": 2700000000, 00:18:48.460 "poll_groups": [ 00:18:48.460 { 00:18:48.460 "name": "nvmf_tgt_poll_group_000", 00:18:48.460 "admin_qpairs": 0, 00:18:48.460 "io_qpairs": 0, 00:18:48.460 "current_admin_qpairs": 0, 00:18:48.460 "current_io_qpairs": 0, 00:18:48.460 "pending_bdev_io": 0, 00:18:48.460 "completed_nvme_io": 0, 00:18:48.460 "transports": [ 00:18:48.460 { 00:18:48.460 "trtype": "TCP" 00:18:48.460 } 00:18:48.460 ] 00:18:48.460 }, 00:18:48.460 { 00:18:48.460 "name": "nvmf_tgt_poll_group_001", 00:18:48.460 "admin_qpairs": 0, 00:18:48.460 "io_qpairs": 0, 00:18:48.460 "current_admin_qpairs": 0, 00:18:48.460 "current_io_qpairs": 0, 00:18:48.460 "pending_bdev_io": 0, 00:18:48.460 "completed_nvme_io": 0, 00:18:48.460 "transports": [ 00:18:48.460 { 00:18:48.460 "trtype": "TCP" 00:18:48.460 } 00:18:48.460 ] 00:18:48.460 }, 00:18:48.460 { 00:18:48.460 "name": "nvmf_tgt_poll_group_002", 00:18:48.460 "admin_qpairs": 0, 00:18:48.460 "io_qpairs": 0, 00:18:48.460 "current_admin_qpairs": 0, 00:18:48.460 "current_io_qpairs": 0, 00:18:48.460 "pending_bdev_io": 0, 00:18:48.460 "completed_nvme_io": 0, 00:18:48.460 "transports": [ 00:18:48.460 { 00:18:48.460 "trtype": "TCP" 00:18:48.460 } 00:18:48.460 ] 00:18:48.460 }, 00:18:48.460 { 00:18:48.460 "name": "nvmf_tgt_poll_group_003", 00:18:48.460 "admin_qpairs": 0, 00:18:48.460 "io_qpairs": 0, 00:18:48.460 "current_admin_qpairs": 0, 00:18:48.460 "current_io_qpairs": 0, 00:18:48.460 "pending_bdev_io": 0, 00:18:48.460 "completed_nvme_io": 0, 00:18:48.460 "transports": [ 00:18:48.460 { 00:18:48.460 "trtype": "TCP" 00:18:48.460 } 00:18:48.460 ] 00:18:48.460 } 
00:18:48.460 ] 00:18:48.460 }' 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:48.460 Malloc1 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.460 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:48.717 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.717 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:48.717 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.717 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:48.717 [2024-07-11 02:24:38.885835] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.717 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.717 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:18:48.717 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:18:48.718 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:18:48.718 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:18:48.718 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:48.718 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:18:48.718 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:48.718 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:18:48.718 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:48.718 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:18:48.718 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:18:48.718 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:18:48.718 [2024-07-11 02:24:38.908325] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc' 00:18:48.718 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:48.718 could not add new controller: failed to write to nvme-fabrics device 00:18:48.718 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:18:48.718 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:48.718 02:24:38 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:48.718 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:48.718 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:48.718 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.718 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:48.718 02:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.718 02:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:48.975 02:24:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:18:48.975 02:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:18:48.975 02:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:48.975 02:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:48.975 02:24:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:18:51.504 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:51.504 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:51.504 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:51.504 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:51.504 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:51.504 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:18:51.504 02:24:41 nvmf_tcp.nvmf_rpc 
-- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:51.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:51.504 02:24:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:51.504 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:18:51.504 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:51.504 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:51.504 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:51.504 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:51.504 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:18:51.504 02:24:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:51.504 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.504 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:51.504 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.504 02:24:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:51.504 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:18:51.504 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:51.505 02:24:41 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:18:51.505 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:51.505 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:18:51.505 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:51.505 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:18:51.505 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:51.505 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:18:51.505 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:18:51.505 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:51.505 [2024-07-11 02:24:41.525931] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc' 00:18:51.505 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:51.505 could not add new controller: failed to write to nvme-fabrics device 00:18:51.505 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:18:51.505 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:51.505 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:51.505 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:51.505 02:24:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:18:51.505 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:51.505 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:51.505 02:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.505 02:24:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:51.763 02:24:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:18:51.763 02:24:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:18:51.763 02:24:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:51.763 02:24:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:51.763 02:24:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:18:53.659 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:53.659 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:53.659 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:53.917 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:53.917 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:53.917 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:18:53.917 02:24:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:53.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:53.917 02:24:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:53.917 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:18:53.917 02:24:44 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:53.917 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:53.917 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:53.917 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:53.917 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:18:53.917 02:24:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:53.917 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.917 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:53.917 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.917 02:24:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:18:53.917 02:24:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:53.917 02:24:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:53.917 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.917 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:53.917 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.918 02:24:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:53.918 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.918 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:53.918 [2024-07-11 02:24:44.199697] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.918 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
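The trace above is the first pass of `target/rpc.sh@81`'s `seq 1 5` loop: each iteration recreates the subsystem, listener, and namespace, connects and disconnects the initiator, then tears everything down. A dry-run sketch of that per-iteration RPC sequence, with `rpc_cmd` stubbed to `echo` (an assumption for illustration; the real helper forwards each call to SPDK's `rpc.py` over `/var/tmp/spdk.sock`):

```shell
#!/usr/bin/env bash
# Dry-run of one pass of the target/rpc.sh create/connect/teardown loop.
# rpc_cmd is a stub here; the real helper invokes scripts/rpc.py.
rpc_cmd() { echo "rpc: $*"; }

loops=5
for i in $(seq 1 "$loops"); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    # in the real test: nvme connect / waitforserial / nvme disconnect here
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done
```

All subsystem names, transport flags, and namespace IDs above are the ones actually issued in this log.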
00:18:53.918 02:24:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:53.918 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.918 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:53.918 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.918 02:24:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:53.918 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.918 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:53.918 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.918 02:24:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:54.485 02:24:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:54.485 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:18:54.485 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:54.485 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:54.485 02:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:18:56.385 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:56.385 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:56.385 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:56.385 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:56.385 02:24:46 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:56.385 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:18:56.385 02:24:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:56.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:56.385 02:24:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:56.385 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:18:56.385 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:56.385 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:56.641 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:56.641 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:56.641 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:18:56.641 02:24:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:56.641 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.641 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:56.641 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.641 02:24:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:56.641 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.641 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:56.641 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.641 02:24:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:56.641 02:24:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:56.641 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.641 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:56.641 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.641 02:24:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:56.641 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.641 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:56.642 [2024-07-11 02:24:46.848306] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:56.642 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.642 02:24:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:56.642 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.642 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:56.642 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.642 02:24:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:56.642 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.642 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:56.642 02:24:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.642 02:24:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:57.207 02:24:47 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:57.207 02:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:18:57.207 02:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:57.207 02:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:57.207 02:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:59.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:18:59.107 02:24:49 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:59.107 [2024-07-11 02:24:49.507970] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:59.107 02:24:49 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.107 02:24:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:59.672 02:24:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:59.672 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:18:59.672 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:59.672 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:59.672 02:24:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:19:01.571 02:24:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:01.571 02:24:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:01.571 02:24:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:01.571 02:24:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:01.571 02:24:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:01.571 02:24:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 
0 00:19:01.571 02:24:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:01.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:01.828 02:24:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:01.828 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:01.829 [2024-07-11 02:24:52.073297] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.829 02:24:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:02.393 02:24:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:02.393 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
00:19:02.393 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:02.393 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:02.393 02:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:19:04.291 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:04.291 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:04.291 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:04.291 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:04.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:04.292 [2024-07-11 02:24:54.650107] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.292 02:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:04.908 02:24:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:04.908 02:24:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:19:04.908 02:24:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:04.908 02:24:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:04.908 02:24:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:06.806 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.806 02:24:57 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:06.806 [2024-07-11 02:24:57.222584] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.806 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.065 [2024-07-11 02:24:57.270626] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.065 [2024-07-11 02:24:57.318761] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.065 02:24:57 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.065 [2024-07-11 02:24:57.366930] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.065 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.066 [2024-07-11 02:24:57.415112] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:19:07.066 "tick_rate": 2700000000, 00:19:07.066 "poll_groups": [ 00:19:07.066 { 00:19:07.066 "name": "nvmf_tgt_poll_group_000", 00:19:07.066 "admin_qpairs": 2, 00:19:07.066 "io_qpairs": 56, 00:19:07.066 "current_admin_qpairs": 0, 00:19:07.066 "current_io_qpairs": 0, 00:19:07.066 "pending_bdev_io": 0, 00:19:07.066 "completed_nvme_io": 199, 00:19:07.066 "transports": [ 00:19:07.066 { 00:19:07.066 "trtype": "TCP" 00:19:07.066 } 00:19:07.066 ] 00:19:07.066 }, 00:19:07.066 { 00:19:07.066 "name": "nvmf_tgt_poll_group_001", 00:19:07.066 "admin_qpairs": 2, 00:19:07.066 "io_qpairs": 56, 
00:19:07.066 "current_admin_qpairs": 0, 00:19:07.066 "current_io_qpairs": 0, 00:19:07.066 "pending_bdev_io": 0, 00:19:07.066 "completed_nvme_io": 152, 00:19:07.066 "transports": [ 00:19:07.066 { 00:19:07.066 "trtype": "TCP" 00:19:07.066 } 00:19:07.066 ] 00:19:07.066 }, 00:19:07.066 { 00:19:07.066 "name": "nvmf_tgt_poll_group_002", 00:19:07.066 "admin_qpairs": 1, 00:19:07.066 "io_qpairs": 56, 00:19:07.066 "current_admin_qpairs": 0, 00:19:07.066 "current_io_qpairs": 0, 00:19:07.066 "pending_bdev_io": 0, 00:19:07.066 "completed_nvme_io": 63, 00:19:07.066 "transports": [ 00:19:07.066 { 00:19:07.066 "trtype": "TCP" 00:19:07.066 } 00:19:07.066 ] 00:19:07.066 }, 00:19:07.066 { 00:19:07.066 "name": "nvmf_tgt_poll_group_003", 00:19:07.066 "admin_qpairs": 2, 00:19:07.066 "io_qpairs": 56, 00:19:07.066 "current_admin_qpairs": 0, 00:19:07.066 "current_io_qpairs": 0, 00:19:07.066 "pending_bdev_io": 0, 00:19:07.066 "completed_nvme_io": 160, 00:19:07.066 "transports": [ 00:19:07.066 { 00:19:07.066 "trtype": "TCP" 00:19:07.066 } 00:19:07.066 ] 00:19:07.066 } 00:19:07.066 ] 00:19:07.066 }' 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:19:07.066 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@113 -- # (( 224 > 0 )) 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:07.324 rmmod nvme_tcp 00:19:07.324 rmmod nvme_fabrics 00:19:07.324 rmmod nvme_keyring 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1795848 ']' 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1795848 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1795848 ']' 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1795848 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1795848 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:07.324 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:07.324 
02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1795848' 00:19:07.324 killing process with pid 1795848 00:19:07.325 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1795848 00:19:07.325 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1795848 00:19:07.585 02:24:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:07.585 02:24:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:07.585 02:24:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:07.585 02:24:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:07.585 02:24:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:07.585 02:24:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.585 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.585 02:24:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.491 02:24:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:09.491 00:19:09.491 real 0m23.350s 00:19:09.491 user 1m16.248s 00:19:09.491 sys 0m3.737s 00:19:09.491 02:24:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:09.491 02:24:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:09.491 ************************************ 00:19:09.491 END TEST nvmf_rpc 00:19:09.491 ************************************ 00:19:09.491 02:24:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:09.491 02:24:59 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:19:09.491 02:24:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:09.491 02:24:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
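The `killprocess` sequence traced above (common/autotest_common.sh@948-972) first confirms the pid is alive with `kill -0`, inspects the process name, then kills and waits for it. A reduced sketch of the check-kill-reap sequence, with the sudo/`reactor_0` name checks omitted:

```shell
# killprocess sketch: verify a pid is alive, then terminate and reap it
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # signal 0: existence check only
    kill "$pid"                              # send SIGTERM
    wait "$pid" 2>/dev/null || true          # reap child; ignore exit status 143
}

sleep 30 &    # stand-in for the nvmf_tgt reactor process
pid=$!
killprocess "$pid"
```

`wait` works here because the target is a child of the current shell; the real helper also guards against killing processes it does not own.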
xtrace_disable 00:19:09.491 02:24:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:09.750 ************************************ 00:19:09.750 START TEST nvmf_invalid 00:19:09.750 ************************************ 00:19:09.750 02:24:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:19:09.750 * Looking for test storage... 00:19:09.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:09.750 02:24:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:09.750 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:19:09.750 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:09.750 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:09.750 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:09.750 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:09.750 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:09.750 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:09.750 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:09.750 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:09.750 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:09.750 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:09.750 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:09.750 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:19:09.750 02:24:59 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:09.750 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:09.750 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:09.750 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:09.750 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:09.750 02:24:59 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:19:09.751 02:24:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:11.657 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:11.657 02:25:01 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:11.657 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:11.657 Found net devices under 0000:08:00.0: cvl_0_0 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:11.657 Found net devices under 0000:08:00.1: cvl_0_1 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:11.657 02:25:01 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:11.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:11.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:19:11.657 00:19:11.657 --- 10.0.0.2 ping statistics --- 00:19:11.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.657 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:11.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:11.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:19:11.657 00:19:11.657 --- 10.0.0.1 ping statistics --- 00:19:11.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.657 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@481 -- # nvmfpid=1799224 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1799224 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1799224 ']' 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:11.657 02:25:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:11.658 [2024-07-11 02:25:01.768036] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:19:11.658 [2024-07-11 02:25:01.768127] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.658 EAL: No free 2048 kB hugepages reported on node 1 00:19:11.658 [2024-07-11 02:25:01.832261] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:11.658 [2024-07-11 02:25:01.919840] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.658 [2024-07-11 02:25:01.919899] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:11.658 [2024-07-11 02:25:01.919917] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.658 [2024-07-11 02:25:01.919930] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.658 [2024-07-11 02:25:01.919943] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:11.658 [2024-07-11 02:25:01.920028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.658 [2024-07-11 02:25:01.920109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.658 [2024-07-11 02:25:01.920157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:11.658 [2024-07-11 02:25:01.920160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.658 02:25:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:11.658 02:25:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:19:11.658 02:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:11.658 02:25:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:11.658 02:25:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:11.658 02:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.658 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:11.658 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode24490 00:19:12.224 [2024-07-11 02:25:02.348006] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:19:12.224 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- 
# out='request: 00:19:12.224 { 00:19:12.224 "nqn": "nqn.2016-06.io.spdk:cnode24490", 00:19:12.224 "tgt_name": "foobar", 00:19:12.224 "method": "nvmf_create_subsystem", 00:19:12.224 "req_id": 1 00:19:12.224 } 00:19:12.224 Got JSON-RPC error response 00:19:12.224 response: 00:19:12.224 { 00:19:12.224 "code": -32603, 00:19:12.224 "message": "Unable to find target foobar" 00:19:12.224 }' 00:19:12.224 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:19:12.224 { 00:19:12.224 "nqn": "nqn.2016-06.io.spdk:cnode24490", 00:19:12.224 "tgt_name": "foobar", 00:19:12.224 "method": "nvmf_create_subsystem", 00:19:12.224 "req_id": 1 00:19:12.224 } 00:19:12.224 Got JSON-RPC error response 00:19:12.224 response: 00:19:12.224 { 00:19:12.224 "code": -32603, 00:19:12.224 "message": "Unable to find target foobar" 00:19:12.224 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:19:12.224 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:19:12.224 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3569 00:19:12.482 [2024-07-11 02:25:02.649042] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3569: invalid serial number 'SPDKISFASTANDAWESOME' 00:19:12.482 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:19:12.482 { 00:19:12.482 "nqn": "nqn.2016-06.io.spdk:cnode3569", 00:19:12.482 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:19:12.482 "method": "nvmf_create_subsystem", 00:19:12.482 "req_id": 1 00:19:12.482 } 00:19:12.482 Got JSON-RPC error response 00:19:12.482 response: 00:19:12.482 { 00:19:12.482 "code": -32602, 00:19:12.482 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:19:12.482 }' 00:19:12.482 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:19:12.482 { 00:19:12.482 "nqn": 
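The assertion at target/invalid.sh@41 above looks cryptic only because of xtrace: `*\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t*` is just the trace's character-escaped rendering of an ordinary `[[ ... == ... ]]` glob match against the captured JSON-RPC error body. A minimal sketch of the same check:

```shell
# sketch: assert on the JSON-RPC error body the way invalid.sh does;
# xtrace prints the pattern as *\U\n\a\b\l\e\ ...* but it is a plain glob
out='request: {"code": -32603, "message": "Unable to find target foobar"}'
if [[ $out == *"Unable to find target"* ]]; then
    echo "got the expected JSON-RPC error"
fi
```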
"nqn.2016-06.io.spdk:cnode3569", 00:19:12.482 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:19:12.482 "method": "nvmf_create_subsystem", 00:19:12.482 "req_id": 1 00:19:12.482 } 00:19:12.482 Got JSON-RPC error response 00:19:12.482 response: 00:19:12.482 { 00:19:12.482 "code": -32602, 00:19:12.482 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:19:12.482 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:19:12.482 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:19:12.482 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode27781 00:19:12.740 [2024-07-11 02:25:02.950041] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27781: invalid model number 'SPDK_Controller' 00:19:12.740 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:19:12.740 { 00:19:12.740 "nqn": "nqn.2016-06.io.spdk:cnode27781", 00:19:12.740 "model_number": "SPDK_Controller\u001f", 00:19:12.740 "method": "nvmf_create_subsystem", 00:19:12.740 "req_id": 1 00:19:12.740 } 00:19:12.741 Got JSON-RPC error response 00:19:12.741 response: 00:19:12.741 { 00:19:12.741 "code": -32602, 00:19:12.741 "message": "Invalid MN SPDK_Controller\u001f" 00:19:12.741 }' 00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:19:12.741 { 00:19:12.741 "nqn": "nqn.2016-06.io.spdk:cnode27781", 00:19:12.741 "model_number": "SPDK_Controller\u001f", 00:19:12.741 "method": "nvmf_create_subsystem", 00:19:12.741 "req_id": 1 00:19:12.741 } 00:19:12.741 Got JSON-RPC error response 00:19:12.741 response: 00:19:12.741 { 00:19:12.741 "code": -32602, 00:19:12.741 "message": "Invalid MN SPDK_Controller\u001f" 00:19:12.741 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@19 -- # local length=21 ll
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b'
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';'
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67'
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27'
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\'
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49'
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37'
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b'
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{'
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86
00:19:12.741 02:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56'
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c'
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\'
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35'
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e'
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39'
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60'
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`'
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27'
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\'
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f'
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65'
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57'
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53'
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31'
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c'
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=,
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54'
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61'
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ; == \- ]]
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ';g'\''I7{V\5N9`'\''oeWS1,Ta'
00:19:12.741 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ';g'\''I7{V\5N9`'\''oeWS1,Ta' nqn.2016-06.io.spdk:cnode14087
00:19:13.001 [2024-07-11 02:25:03.283087] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14087: invalid serial number ';g'I7{V\5N9`'oeWS1,Ta'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:19:13.001 {
00:19:13.001 "nqn": "nqn.2016-06.io.spdk:cnode14087",
00:19:13.001 "serial_number": ";g'\''I7{V\\5N9`'\''oeWS1,Ta",
00:19:13.001 "method": "nvmf_create_subsystem",
00:19:13.001 "req_id": 1
00:19:13.001 }
00:19:13.001 Got JSON-RPC error response
00:19:13.001 response:
00:19:13.001 {
00:19:13.001 "code": -32602,
00:19:13.001 "message": "Invalid SN ;g'\''I7{V\\5N9`'\''oeWS1,Ta"
00:19:13.001 }'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request:
00:19:13.001 {
00:19:13.001 "nqn": "nqn.2016-06.io.spdk:cnode14087",
00:19:13.001 "serial_number": ";g'I7{V\\5N9`'oeWS1,Ta",
00:19:13.001 "method": "nvmf_create_subsystem",
00:19:13.001 "req_id": 1
00:19:13.001 }
00:19:13.001 Got JSON-RPC error response
00:19:13.001 response:
00:19:13.001 {
00:19:13.001 "code": -32602,
00:19:13.001 "message": "Invalid SN ;g'I7{V\\5N9`'oeWS1,Ta"
00:19:13.001 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=:
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?'
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.001 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b'
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='['
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70'
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e'
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47'
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b'
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='['
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f'
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a'
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=:
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c'
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|'
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f'
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26'
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&'
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22'
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"'
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b'
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='['
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f'
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?'
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26'
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&'
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c'
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=,
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77'
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57'
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.002 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b'
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d'
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b'
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52'
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64'
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ : == \- ]]
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ':x;oe*g5@@]|l/{A9y?[pnG[/:|o&"[?&,wW+m+Rd'
00:19:13.261 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ':x;oe*g5@@]|l/{A9y?[pnG[/:|o&"[?&,wW+m+Rd' nqn.2016-06.io.spdk:cnode30098
00:19:13.519 [2024-07-11 02:25:03.716486] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30098: invalid model number ':x;oe*g5@@]|l/{A9y?[pnG[/:|o&"[?&,wW+m+Rd'
00:19:13.519 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:19:13.519 {
00:19:13.519 "nqn": "nqn.2016-06.io.spdk:cnode30098",
00:19:13.519 "model_number": ":x;oe*g5@@]|l/{A9y?[pnG[/:|o&\"[?&,wW+m+Rd",
00:19:13.519 "method": "nvmf_create_subsystem",
00:19:13.520 "req_id": 1
00:19:13.520 }
00:19:13.520 Got JSON-RPC error response
00:19:13.520 response:
00:19:13.520 {
00:19:13.520 "code": -32602,
00:19:13.520 "message": "Invalid MN :x;oe*g5@@]|l/{A9y?[pnG[/:|o&\"[?&,wW+m+Rd"
00:19:13.520 }'
00:19:13.520 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request:
00:19:13.520 {
00:19:13.520 "nqn": "nqn.2016-06.io.spdk:cnode30098",
00:19:13.520 "model_number": ":x;oe*g5@@]|l/{A9y?[pnG[/:|o&\"[?&,wW+m+Rd",
00:19:13.520 "method": "nvmf_create_subsystem",
00:19:13.520 "req_id": 1
00:19:13.520 }
00:19:13.520 Got JSON-RPC error response
00:19:13.520 response:
00:19:13.520 {
00:19:13.520 "code": -32602,
00:19:13.520 "message": "Invalid MN :x;oe*g5@@]|l/{A9y?[pnG[/:|o&\"[?&,wW+m+Rd"
00:19:13.520 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:19:13.520 02:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp
00:19:13.778 [2024-07-11 02:25:04.017580] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:19:13.778 02:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:19:14.036 02:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]]
00:19:14.036 02:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo ''
00:19:14.036 02:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
00:19:14.036 02:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP=
00:19:14.036 02:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421
00:19:14.295 [2024-07-11 02:25:04.627604] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:19:14.295 02:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request:
00:19:14.295 {
00:19:14.295 "nqn": "nqn.2016-06.io.spdk:cnode",
00:19:14.295 "listen_address": {
00:19:14.295 "trtype": "tcp",
00:19:14.295 "traddr": "",
00:19:14.295 "trsvcid": "4421"
00:19:14.295 },
00:19:14.295 "method": "nvmf_subsystem_remove_listener",
00:19:14.295 "req_id": 1
00:19:14.295 }
00:19:14.295 Got JSON-RPC error response
00:19:14.295 response:
00:19:14.295 {
00:19:14.295 "code": -32602,
00:19:14.295 "message": "Invalid parameters"
00:19:14.295 }'
00:19:14.295 02:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request:
00:19:14.295 {
00:19:14.295 "nqn": "nqn.2016-06.io.spdk:cnode",
00:19:14.295 "listen_address": {
00:19:14.295 "trtype": "tcp",
00:19:14.295 "traddr": "",
00:19:14.295 "trsvcid": "4421"
00:19:14.295 },
00:19:14.295 "method": "nvmf_subsystem_remove_listener",
00:19:14.295 "req_id": 1
00:19:14.295 }
00:19:14.295 Got JSON-RPC error response
00:19:14.295 response:
00:19:14.295 {
00:19:14.295 "code": -32602,
00:19:14.295 "message": "Invalid parameters"
00:19:14.295 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
00:19:14.295 02:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31456 -i 0
00:19:14.553 [2024-07-11 02:25:04.924468] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31456: invalid cntlid range [0-65519]
00:19:14.553 02:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request:
00:19:14.553 {
00:19:14.553 "nqn": "nqn.2016-06.io.spdk:cnode31456",
00:19:14.553 "min_cntlid": 0,
00:19:14.553 "method": "nvmf_create_subsystem",
00:19:14.553 "req_id": 1
00:19:14.553 }
00:19:14.553 Got JSON-RPC error response
00:19:14.553 response:
00:19:14.553 {
00:19:14.553 "code": -32602,
00:19:14.553 "message": "Invalid cntlid range [0-65519]"
00:19:14.553 }'
00:19:14.553 02:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request:
00:19:14.553 {
00:19:14.553 "nqn": "nqn.2016-06.io.spdk:cnode31456",
00:19:14.553 "min_cntlid": 0,
00:19:14.553 "method": "nvmf_create_subsystem",
00:19:14.553 "req_id": 1
00:19:14.553 }
00:19:14.553 Got JSON-RPC error response
00:19:14.553 response:
00:19:14.553 {
00:19:14.553 "code": -32602,
00:19:14.553 "message": "Invalid cntlid range [0-65519]"
00:19:14.553 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:19:14.553 02:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21694 -i 65520
00:19:14.812 [2024-07-11 02:25:05.221492] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21694: invalid cntlid range [65520-65519]
00:19:15.071 02:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request:
00:19:15.071 {
00:19:15.071 "nqn": "nqn.2016-06.io.spdk:cnode21694",
00:19:15.071 "min_cntlid": 65520,
00:19:15.071 "method": "nvmf_create_subsystem",
00:19:15.071 "req_id": 1
00:19:15.071 }
00:19:15.071 Got JSON-RPC error response
00:19:15.071 response:
00:19:15.071 {
00:19:15.071 "code": -32602,
00:19:15.071 "message": "Invalid cntlid range [65520-65519]"
00:19:15.071 }'
00:19:15.071 02:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request:
00:19:15.071 {
00:19:15.071 "nqn": "nqn.2016-06.io.spdk:cnode21694",
00:19:15.071 "min_cntlid": 65520,
00:19:15.071 "method": "nvmf_create_subsystem",
00:19:15.071 "req_id": 1
00:19:15.071 }
00:19:15.071 Got JSON-RPC error response
00:19:15.071 response:
00:19:15.071 {
00:19:15.071 "code": -32602,
00:19:15.071 "message": "Invalid cntlid range [65520-65519]"
00:19:15.071 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:19:15.071 02:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10601 -I 0
00:19:15.329 [2024-07-11 02:25:05.522465] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10601: invalid cntlid range [1-0]
00:19:15.329 02:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request:
00:19:15.329 {
00:19:15.329 "nqn": "nqn.2016-06.io.spdk:cnode10601",
00:19:15.329 "max_cntlid": 0,
00:19:15.329 "method": "nvmf_create_subsystem",
00:19:15.329 "req_id": 1
00:19:15.329 }
00:19:15.329 Got JSON-RPC error response
00:19:15.329 response:
00:19:15.329 {
00:19:15.329 "code": -32602,
00:19:15.329 "message": "Invalid cntlid range [1-0]"
00:19:15.329 }'
00:19:15.329 02:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request:
00:19:15.329 {
00:19:15.329 "nqn": "nqn.2016-06.io.spdk:cnode10601",
00:19:15.329 "max_cntlid": 0,
00:19:15.329 "method": "nvmf_create_subsystem",
00:19:15.329 "req_id": 1
00:19:15.329 }
00:19:15.329 Got JSON-RPC error response
00:19:15.329 response:
00:19:15.329 {
00:19:15.329 "code": -32602,
00:19:15.329 "message": "Invalid cntlid range [1-0]"
00:19:15.329 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:19:15.329 02:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12343 -I 65520
00:19:15.586 [2024-07-11 02:25:05.823492] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12343: invalid cntlid range [1-65520]
00:19:15.586 02:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request:
00:19:15.586 {
00:19:15.586 "nqn": "nqn.2016-06.io.spdk:cnode12343",
00:19:15.586 "max_cntlid": 65520,
00:19:15.586 "method": "nvmf_create_subsystem",
00:19:15.587 "req_id": 1
00:19:15.587 }
00:19:15.587 Got JSON-RPC error response
00:19:15.587 response:
00:19:15.587 {
00:19:15.587 "code": -32602,
00:19:15.587 "message": "Invalid cntlid range [1-65520]"
00:19:15.587 }'
00:19:15.587 02:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request:
00:19:15.587 {
00:19:15.587 "nqn": "nqn.2016-06.io.spdk:cnode12343",
00:19:15.587 "max_cntlid": 65520,
00:19:15.587 "method": "nvmf_create_subsystem",
00:19:15.587 "req_id": 1
00:19:15.587 }
00:19:15.587 Got JSON-RPC error response
00:19:15.587 response:
00:19:15.587 {
00:19:15.587 "code": -32602,
00:19:15.587 "message": "Invalid cntlid range [1-65520]"
00:19:15.587 } ==
*\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:15.587 02:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10910 -i 6 -I 5 00:19:15.845 [2024-07-11 02:25:06.120456] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10910: invalid cntlid range [6-5] 00:19:15.845 02:25:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:19:15.845 { 00:19:15.845 "nqn": "nqn.2016-06.io.spdk:cnode10910", 00:19:15.845 "min_cntlid": 6, 00:19:15.845 "max_cntlid": 5, 00:19:15.845 "method": "nvmf_create_subsystem", 00:19:15.845 "req_id": 1 00:19:15.845 } 00:19:15.845 Got JSON-RPC error response 00:19:15.845 response: 00:19:15.845 { 00:19:15.845 "code": -32602, 00:19:15.845 "message": "Invalid cntlid range [6-5]" 00:19:15.845 }' 00:19:15.845 02:25:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:19:15.845 { 00:19:15.845 "nqn": "nqn.2016-06.io.spdk:cnode10910", 00:19:15.845 "min_cntlid": 6, 00:19:15.845 "max_cntlid": 5, 00:19:15.845 "method": "nvmf_create_subsystem", 00:19:15.845 "req_id": 1 00:19:15.845 } 00:19:15.845 Got JSON-RPC error response 00:19:15.845 response: 00:19:15.846 { 00:19:15.846 "code": -32602, 00:19:15.846 "message": "Invalid cntlid range [6-5]" 00:19:15.846 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:15.846 02:25:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:19:16.106 { 00:19:16.106 "name": "foobar", 00:19:16.106 "method": "nvmf_delete_target", 00:19:16.106 "req_id": 1 00:19:16.106 } 00:19:16.106 Got JSON-RPC error response 00:19:16.106 response: 00:19:16.106 { 00:19:16.106 "code": -32602, 00:19:16.106 "message": "The specified target doesn'\''t exist, 
cannot delete it." 00:19:16.106 }' 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:19:16.106 { 00:19:16.106 "name": "foobar", 00:19:16.106 "method": "nvmf_delete_target", 00:19:16.106 "req_id": 1 00:19:16.106 } 00:19:16.106 Got JSON-RPC error response 00:19:16.106 response: 00:19:16.106 { 00:19:16.106 "code": -32602, 00:19:16.106 "message": "The specified target doesn't exist, cannot delete it." 00:19:16.106 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:16.106 rmmod nvme_tcp 00:19:16.106 rmmod nvme_fabrics 00:19:16.106 rmmod nvme_keyring 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1799224 ']' 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1799224 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 1799224 ']' 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 1799224 
00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1799224 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1799224' 00:19:16.106 killing process with pid 1799224 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 1799224 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 1799224 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:16.106 02:25:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.649 02:25:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:18.649 00:19:18.649 real 0m8.649s 00:19:18.649 user 0m22.350s 00:19:18.649 sys 0m2.174s 00:19:18.649 02:25:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:18.649 02:25:08 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:18.649 ************************************ 00:19:18.649 END TEST nvmf_invalid 00:19:18.649 ************************************ 00:19:18.649 02:25:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:18.649 02:25:08 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:19:18.649 02:25:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:18.649 02:25:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:18.649 02:25:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:18.649 ************************************ 00:19:18.649 START TEST nvmf_abort 00:19:18.649 ************************************ 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:19:18.649 * Looking for test storage... 
00:19:18.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:19:18.649 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:18.650 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:18.650 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:18.650 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:18.650 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:18.650 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.650 02:25:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:18.650 02:25:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.650 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:18.650 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:18.650 02:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:19:18.650 02:25:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:20.027 02:25:10 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:20.027 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:20.027 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- 
# (( 0 > 0 )) 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:20.027 Found net devices under 0000:08:00.0: cvl_0_0 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:20.027 Found net devices under 
0000:08:00.1: cvl_0_1 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:20.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:20.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:19:20.027 00:19:20.027 --- 10.0.0.2 ping statistics --- 00:19:20.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.027 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:19:20.027 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:20.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:20.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:19:20.027 00:19:20.027 --- 10.0.0.1 ping statistics --- 00:19:20.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.028 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:19:20.028 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:20.028 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:19:20.028 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:20.028 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:20.028 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:20.028 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:20.028 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:20.028 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:20.028 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:20.028 02:25:10 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:19:20.028 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:20.028 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:20.028 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:20.028 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1801294 00:19:20.028 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:20.028 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1801294 00:19:20.028 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1801294 ']' 00:19:20.028 02:25:10 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.028 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:20.028 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.028 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:20.028 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:20.286 [2024-07-11 02:25:10.494889] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:19:20.286 [2024-07-11 02:25:10.494989] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.286 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.286 [2024-07-11 02:25:10.562696] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:20.286 [2024-07-11 02:25:10.653373] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.286 [2024-07-11 02:25:10.653424] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.286 [2024-07-11 02:25:10.653440] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.286 [2024-07-11 02:25:10.653454] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.286 [2024-07-11 02:25:10.653466] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:20.286 [2024-07-11 02:25:10.656532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.286 [2024-07-11 02:25:10.656602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:20.286 [2024-07-11 02:25:10.656613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.544 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:20.544 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:19:20.544 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:20.544 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:20.544 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:20.544 02:25:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.544 02:25:10 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:19:20.544 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.544 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:20.544 [2024-07-11 02:25:10.789808] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.545 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.545 02:25:10 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:19:20.545 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.545 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:20.545 Malloc0 00:19:20.545 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.545 02:25:10 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:19:20.545 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.545 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:20.545 Delay0 00:19:20.545 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.545 02:25:10 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:20.545 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.545 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:20.545 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.545 02:25:10 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:19:20.545 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.545 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:20.545 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.545 02:25:10 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:20.545 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.545 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:20.545 [2024-07-11 02:25:10.863700] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:20.545 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.545 02:25:10 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:20.545 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.545 02:25:10 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:19:20.545 02:25:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.545 02:25:10 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:19:20.545 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.801 [2024-07-11 02:25:10.970492] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:22.709 Initializing NVMe Controllers 00:19:22.709 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:19:22.709 controller IO queue size 128 less than required 00:19:22.709 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:19:22.709 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:19:22.709 Initialization complete. Launching workers. 
00:19:22.709 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 26867 00:19:22.709 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 26932, failed to submit 62 00:19:22.709 success 26871, unsuccess 61, failed 0 00:19:22.709 02:25:13 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:22.709 02:25:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.709 02:25:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:22.709 02:25:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.709 02:25:13 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:19:22.709 02:25:13 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:19:22.709 02:25:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:22.709 02:25:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:19:22.709 02:25:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:22.709 02:25:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:19:22.709 02:25:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:22.709 02:25:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:22.709 rmmod nvme_tcp 00:19:22.709 rmmod nvme_fabrics 00:19:22.966 rmmod nvme_keyring 00:19:22.966 02:25:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:22.966 02:25:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:19:22.966 02:25:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:19:22.966 02:25:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1801294 ']' 00:19:22.966 02:25:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1801294 00:19:22.966 02:25:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1801294 ']' 00:19:22.966 02:25:13 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1801294 00:19:22.966 02:25:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:19:22.966 02:25:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:22.966 02:25:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1801294 00:19:22.966 02:25:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:22.966 02:25:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:22.966 02:25:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1801294' 00:19:22.966 killing process with pid 1801294 00:19:22.966 02:25:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1801294 00:19:22.966 02:25:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1801294 00:19:22.966 02:25:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:22.966 02:25:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:22.966 02:25:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:22.966 02:25:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:22.966 02:25:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:22.966 02:25:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.966 02:25:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:22.966 02:25:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.500 02:25:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:25.500 00:19:25.500 real 0m6.792s 00:19:25.500 user 0m10.306s 00:19:25.500 sys 0m2.201s 00:19:25.500 02:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:19:25.500 02:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:25.500 ************************************ 00:19:25.500 END TEST nvmf_abort 00:19:25.500 ************************************ 00:19:25.500 02:25:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:25.500 02:25:15 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:19:25.500 02:25:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:25.500 02:25:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:25.500 02:25:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:25.500 ************************************ 00:19:25.500 START TEST nvmf_ns_hotplug_stress 00:19:25.500 ************************************ 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:19:25.500 * Looking for test storage... 
00:19:25.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:25.500 02:25:15 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:19:25.500 02:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:19:26.884 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:26.884 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:19:26.884 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:19:26.884 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:26.884 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:26.884 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:26.884 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:26.884 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:19:26.884 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:26.884 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:19:26.884 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:19:26.884 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:19:26.884 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:19:26.884 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:19:26.884 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:19:26.884 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:26.884 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:26.884 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:26.884 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:26.884 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:26.885 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:26.885 
02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:26.885 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:26.885 
Found net devices under 0000:08:00.0: cvl_0_0 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:26.885 Found net devices under 0000:08:00.1: cvl_0_1 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:26.885 02:25:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:26.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:26.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:19:26.885 00:19:26.885 --- 10.0.0.2 ping statistics --- 00:19:26.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.885 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:26.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:26.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:19:26.885 00:19:26.885 --- 10.0.0.1 ping statistics --- 00:19:26.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.885 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:26.885 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:27.185 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:19:27.185 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:19:27.185 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:27.185 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:19:27.185 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1803014 00:19:27.185 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:27.185 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1803014 00:19:27.185 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1803014 ']' 00:19:27.185 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.185 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:27.185 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.185 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:27.185 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:19:27.185 [2024-07-11 02:25:17.369892] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:19:27.185 [2024-07-11 02:25:17.369985] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.185 EAL: No free 2048 kB hugepages reported on node 1 00:19:27.185 [2024-07-11 02:25:17.435859] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:27.185 [2024-07-11 02:25:17.525441] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.185 [2024-07-11 02:25:17.525504] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.185 [2024-07-11 02:25:17.525529] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.185 [2024-07-11 02:25:17.525543] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.185 [2024-07-11 02:25:17.525555] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:27.185 [2024-07-11 02:25:17.525651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.185 [2024-07-11 02:25:17.525735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:27.185 [2024-07-11 02:25:17.525767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.444 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:27.444 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:19:27.444 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:27.444 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:27.444 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:19:27.444 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:27.444 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:19:27.444 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:27.702 [2024-07-11 02:25:17.932530] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.702 02:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:27.960 02:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:28.218 [2024-07-11 02:25:18.524025] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:19:28.218 02:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:28.476 02:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:19:28.734 Malloc0 00:19:28.991 02:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:29.250 Delay0 00:19:29.250 02:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:29.508 02:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:19:29.766 NULL1 00:19:29.766 02:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:19:30.023 02:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1803341 00:19:30.023 02:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:19:30.023 02:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:30.023 02:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:30.023 EAL: No free 2048 kB hugepages reported on node 1 00:19:31.396 Read completed with error (sct=0, sc=11) 00:19:31.396 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:31.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:31.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:31.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:31.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:31.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:31.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:31.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:31.654 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:19:31.654 02:25:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:19:31.911 true 00:19:31.911 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:31.911 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:32.845 02:25:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:32.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:32.845 02:25:23 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:19:32.845 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:19:33.408 true 00:19:33.408 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:33.408 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:33.665 02:25:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:33.665 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:19:33.665 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:19:33.923 true 00:19:33.923 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:33.923 02:25:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:34.856 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:35.114 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:19:35.114 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:19:35.372 true 00:19:35.372 
02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:35.372 02:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:35.938 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:36.196 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:19:36.196 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:19:36.453 true 00:19:36.453 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:36.453 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:36.709 02:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:36.965 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:19:36.965 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:19:37.222 true 00:19:37.222 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:37.222 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
1 00:19:37.479 02:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:38.044 02:25:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:19:38.044 02:25:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:19:38.044 true 00:19:38.044 02:25:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:38.044 02:25:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:38.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:38.976 02:25:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:38.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:39.234 02:25:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:19:39.234 02:25:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:19:39.492 true 00:19:39.492 02:25:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:39.492 02:25:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:39.771 02:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:40.336 02:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:19:40.336 02:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:19:40.594 true 00:19:40.594 02:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:40.594 02:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:40.852 02:25:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:41.109 02:25:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:19:41.109 02:25:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:19:41.366 true 00:19:41.366 02:25:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:41.366 02:25:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:42.298 02:25:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:42.555 02:25:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:19:42.555 02:25:32 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:19:42.813 true 00:19:42.813 02:25:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:42.813 02:25:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:43.071 02:25:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:43.329 02:25:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:19:43.329 02:25:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:19:43.586 true 00:19:43.586 02:25:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:43.586 02:25:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:43.844 02:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:44.102 02:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:19:44.102 02:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:19:44.360 true 00:19:44.360 02:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:44.360 02:25:34 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:45.295 02:25:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:45.576 02:25:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:19:45.576 02:25:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:19:45.872 true 00:19:45.872 02:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:45.872 02:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:46.130 02:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:46.388 02:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:19:46.388 02:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:19:46.646 true 00:19:46.646 02:25:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:46.646 02:25:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:46.904 02:25:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:47.162 02:25:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:19:47.162 02:25:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:19:47.419 true 00:19:47.419 02:25:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:47.420 02:25:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:48.353 02:25:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:48.610 02:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:19:48.611 02:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:19:49.176 true 00:19:49.176 02:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:49.176 02:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:49.434 02:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:49.691 02:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:19:49.691 02:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:19:49.949 true 00:19:49.949 02:25:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:49.949 02:25:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:50.207 02:25:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:50.465 02:25:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:19:50.465 02:25:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:19:50.721 true 00:19:50.721 02:25:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:50.721 02:25:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:51.654 02:25:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:51.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:51.912 02:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:19:51.912 02:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:19:52.170 true 00:19:52.170 02:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:52.170 02:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:52.428 02:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:52.685 02:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:19:52.685 02:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:19:52.943 true 00:19:52.943 02:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:52.943 02:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:53.201 02:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:53.458 02:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:19:53.458 02:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:19:53.716 true 00:19:53.716 02:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:53.716 02:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:54.648 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:19:54.648 02:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:54.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:54.906 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:54.906 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:54.906 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:54.906 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:54.906 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:55.163 02:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:19:55.163 02:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:19:55.420 true 00:19:55.420 02:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:55.420 02:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:55.983 02:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:56.546 02:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:19:56.546 02:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:19:56.546 true 00:19:56.803 02:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:56.803 02:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:57.059 02:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:57.316 02:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:19:57.316 02:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:19:57.572 true 00:19:57.572 02:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:57.572 02:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:57.828 02:25:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:58.084 02:25:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:19:58.084 02:25:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:19:58.341 true 00:19:58.341 02:25:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:58.341 02:25:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:59.272 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:19:59.272 02:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:59.272 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:59.272 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:59.530 02:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:19:59.530 02:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:19:59.787 true 00:19:59.787 02:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:19:59.787 02:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:00.044 02:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:00.302 02:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:20:00.302 02:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:20:00.302 true 00:20:00.302 02:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:20:00.302 02:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:01.234 Initializing NVMe Controllers 00:20:01.234 Attached to NVMe over Fabrics 
controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:01.234 Controller IO queue size 128, less than required. 00:20:01.234 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:01.234 Controller IO queue size 128, less than required. 00:20:01.234 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:01.234 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:01.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:01.235 Initialization complete. Launching workers. 00:20:01.235 ======================================================== 00:20:01.235 Latency(us) 00:20:01.235 Device Information : IOPS MiB/s Average min max 00:20:01.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 769.10 0.38 75614.51 3129.93 1049995.42 00:20:01.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7705.83 3.76 16610.55 4285.84 531375.19 00:20:01.235 ======================================================== 00:20:01.235 Total : 8474.93 4.14 21965.16 3129.93 1049995.42 00:20:01.235 00:20:01.235 02:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:01.492 02:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:20:01.492 02:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:20:01.750 true 00:20:01.750 02:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1803341 00:20:01.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1803341) - No 
such process 00:20:01.750 02:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1803341 00:20:01.750 02:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:02.007 02:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:02.264 02:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:20:02.264 02:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:20:02.264 02:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:20:02.264 02:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:20:02.264 02:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:20:02.521 null0 00:20:02.521 02:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:20:02.521 02:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:20:02.521 02:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:20:02.779 null1 00:20:02.779 02:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:20:02.779 02:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:20:02.779 02:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 
4096 00:20:03.037 null2 00:20:03.037 02:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:20:03.037 02:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:20:03.037 02:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:20:03.295 null3 00:20:03.295 02:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:20:03.295 02:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:20:03.295 02:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:20:03.552 null4 00:20:03.552 02:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:20:03.552 02:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:20:03.552 02:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:20:03.810 null5 00:20:03.810 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:20:03.810 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:20:03.810 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:20:04.099 null6 00:20:04.099 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:20:04.099 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:20:04.099 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:20:04.358 null7 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1806609 1806610 1806612 1806614 1806616 1806618 1806620 1806622 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:04.358 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:20:04.616 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:20:04.616 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:04.616 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:04.616 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:20:04.616 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:20:04.616 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:20:04.616 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:04.616 02:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:20:04.910 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:04.910 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:04.910 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:20:04.910 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:04.910 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:04.910 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:20:04.910 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:04.910 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:04.910 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 1 nqn.2016-06.io.spdk:cnode1 null0 00:20:04.910 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:04.910 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:04.910 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:20:04.910 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:04.910 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:04.910 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:20:04.910 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:04.910 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:04.911 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:20:04.911 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:04.911 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:04.911 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:20:04.911 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:04.911 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:04.911 02:25:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:20:05.169 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:20:05.169 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:05.169 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:05.169 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:20:05.169 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:20:05.169 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:05.169 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:20:05.169 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:20:05.427 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:20:05.428 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:05.428 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:20:05.428 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:05.428 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:05.428 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:20:05.428 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:05.428 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:05.428 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:20:05.428 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:05.428 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:05.428 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:20:05.428 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:05.428 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:05.428 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:20:05.428 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:05.428 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:05.428 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:20:05.428 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:05.428 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:05.428 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:20:05.428 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:05.428 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:05.428 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:20:05.686 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:05.686 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:20:05.686 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:20:05.686 02:25:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:05.686 02:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:20:05.686 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:20:05.686 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:20:05.686 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:05.944 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:05.944 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:05.944 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:20:05.944 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:05.944 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:05.944 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:20:05.944 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:20:05.944 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:05.944 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:20:05.944 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:05.944 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:05.944 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:20:05.944 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:05.944 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:05.944 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:20:05.944 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:05.944 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:05.944 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:20:05.944 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:05.944 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:05.944 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:20:05.945 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:05.945 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:05.945 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:20:06.202 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:20:06.202 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:06.202 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:06.202 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:20:06.202 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:20:06.202 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:20:06.202 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:06.202 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:20:06.460 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:06.460 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:06.460 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:20:06.460 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:06.460 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:06.460 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:20:06.460 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:06.461 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:06.461 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:20:06.461 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:06.461 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:06.461 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:20:06.461 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:06.461 02:25:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:06.461 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:20:06.461 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:06.461 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:06.461 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:20:06.461 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:06.461 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:06.461 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:20:06.461 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:06.461 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:06.461 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:20:06.719 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:06.719 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:20:06.719 02:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:06.719 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:20:06.719 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:20:06.977 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:20:06.977 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:20:06.977 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:06.977 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:06.977 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:06.977 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:20:06.977 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:06.977 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:06.977 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:20:06.977 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:06.977 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:06.977 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:20:06.977 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:06.977 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:06.977 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:20:06.977 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:06.977 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:06.977 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:20:07.236 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:07.236 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:07.236 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:20:07.236 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:07.236 02:25:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:07.236 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:20:07.236 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:07.236 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:07.236 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:20:07.236 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:07.236 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:20:07.236 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:07.236 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:20:07.236 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:20:07.495 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
8 00:20:07.495 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:20:07.495 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:07.495 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:07.495 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:07.495 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:20:07.495 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:07.495 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:07.495 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:20:07.495 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:07.495 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:07.495 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:20:07.495 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:07.495 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:07.495 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:20:07.752 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:07.752 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:07.752 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:20:07.752 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:07.752 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:07.752 02:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:20:07.752 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:07.752 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:07.752 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:20:07.752 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:07.752 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:07.752 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:20:07.752 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:07.752 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:20:07.752 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:07.752 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:20:08.010 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:20:08.010 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:20:08.010 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:08.010 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:20:08.010 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:08.010 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:08.010 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:20:08.010 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:08.010 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:08.010 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:20:08.010 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:08.010 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:08.011 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:20:08.011 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:08.011 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:08.011 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:20:08.269 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:08.269 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:08.269 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:20:08.269 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:08.269 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:08.269 02:25:58 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:20:08.269 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:08.269 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:08.269 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:20:08.269 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:08.269 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:08.269 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:20:08.269 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:20:08.269 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:08.269 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:08.269 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:20:08.527 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:20:08.527 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:08.527 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:20:08.527 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:20:08.527 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:08.527 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:08.527 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:20:08.527 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:08.527 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:08.527 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:20:08.527 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:08.527 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:08.527 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 1 nqn.2016-06.io.spdk:cnode1 null0 00:20:08.527 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:08.527 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:08.527 02:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:20:08.785 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:08.785 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:08.785 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:20:08.785 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:08.785 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:08.785 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:20:08.785 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:08.785 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:08.785 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:20:08.785 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:08.785 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:08.785 02:25:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:20:08.785 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:20:08.785 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:08.785 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:20:08.785 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:09.043 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:20:09.043 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:20:09.043 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:09.043 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:20:09.043 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:20:09.043 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:09.043 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:20:09.043 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:09.043 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:09.043 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:20:09.043 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:09.043 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:09.043 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:20:09.301 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:09.302 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:09.302 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:20:09.302 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:09.302 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:09.302 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:20:09.302 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:09.302 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:09.302 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:20:09.302 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:09.302 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:09.302 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:20:09.302 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:09.302 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:09.302 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:20:09.302 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:09.302 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:20:09.302 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:09.559 02:25:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:20:09.559 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:20:09.560 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:20:09.560 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:20:09.560 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:09.560 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:09.560 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:09.560 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:09.560 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:09.560 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:09.560 02:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:09.817 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:09.817 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:09.817 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:09.817 02:26:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:09.817 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:09.817 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:09.817 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:09.817 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:09.817 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:20:09.817 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:20:09.817 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:20:09.817 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:20:09.817 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:09.817 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:20:09.817 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:09.817 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:20:09.817 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:09.817 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:09.817 rmmod nvme_tcp 00:20:09.817 rmmod nvme_fabrics 00:20:09.817 rmmod nvme_keyring 00:20:09.817 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:09.817 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:20:09.818 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:20:09.818 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1803014 ']' 00:20:09.818 
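The interleaved add/remove trace above (ns_hotplug_stress.sh lines @16–@18) follows a simple pattern: up to ten iterations, each attaching namespaces 1..8 to cnode1 and detaching them again in varying order. A minimal self-contained sketch of that pattern, with `add_ns`/`remove_ns` as stand-in helpers for the `rpc.py nvmf_subsystem_add_ns`/`nvmf_subsystem_remove_ns` calls seen in the log (the real script drives a live SPDK target; this only models the bookkeeping):

```shell
#!/usr/bin/env bash
# Simulation of the hotplug stress loop from the trace (bash 4+ for declare -A).
declare -A ns_table   # nsid -> backing null bdev

# Stand-ins for rpc.py nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns:
add_ns()    { ns_table[$1]="null$(( $1 - 1 ))"; }   # nsid n maps to null(n-1), as in the log
remove_ns() { unset "ns_table[$1]"; }

i=0
while (( i < 10 )); do
    for n in 1 2 3 4 5 6 7 8; do add_ns "$n"; done
    for n in 2 6 1 3 4 8 7 5; do remove_ns "$n"; done   # removal order varies per iteration
    (( ++i ))
done
echo "namespaces left: ${#ns_table[@]}, iterations: $i"   # namespaces left: 0, iterations: 10
```

Each iteration ends with the subsystem back at zero namespaces, which is why the trace alternates clean blocks of eight `add_ns` lines and eight `remove_ns` lines before the loop counter advances.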
02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1803014 00:20:09.818 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1803014 ']' 00:20:09.818 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1803014 00:20:09.818 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:20:09.818 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:09.818 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1803014 00:20:09.818 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:09.818 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:09.818 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1803014' 00:20:09.818 killing process with pid 1803014 00:20:09.818 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1803014 00:20:09.818 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1803014 00:20:10.075 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:10.075 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:10.075 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:10.075 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:10.075 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:10.075 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.075 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:10.075 02:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.613 02:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:12.613 00:20:12.613 real 0m46.970s 00:20:12.613 user 3m38.321s 00:20:12.613 sys 0m15.150s 00:20:12.613 02:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:12.613 02:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:20:12.613 ************************************ 00:20:12.613 END TEST nvmf_ns_hotplug_stress 00:20:12.613 ************************************ 00:20:12.613 02:26:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:12.613 02:26:02 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:20:12.613 02:26:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:12.613 02:26:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:12.613 02:26:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:12.613 ************************************ 00:20:12.613 START TEST nvmf_connect_stress 00:20:12.613 ************************************ 00:20:12.613 02:26:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:20:12.613 * Looking for test storage... 
00:20:12.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:12.613 02:26:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:12.613 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:20:12.613 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.613 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.613 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.613 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.613 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.613 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.613 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.613 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.613 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:12.614 02:26:02 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:20:12.614 02:26:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:13.991 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:13.991 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:20:13.991 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:13.991 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:13.991 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:13.991 02:26:04 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:13.991 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:13.991 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:20:13.991 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:13.992 
02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:13.992 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:13.992 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:13.992 
02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:13.992 Found net devices under 0000:08:00.0: cvl_0_0 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.992 02:26:04 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:13.992 Found net devices under 0000:08:00.1: cvl_0_1 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:13.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:13.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:20:13.992 00:20:13.992 --- 10.0.0.2 ping statistics --- 00:20:13.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.992 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:13.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:13.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:20:13.992 00:20:13.992 --- 10.0.0.1 ping statistics --- 00:20:13.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.992 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:13.992 02:26:04 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1808758 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1808758 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1808758 ']' 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:13.992 02:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:13.992 [2024-07-11 02:26:04.306068] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:20:13.992 [2024-07-11 02:26:04.306164] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.992 EAL: No free 2048 kB hugepages reported on node 1 00:20:13.992 [2024-07-11 02:26:04.371471] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:14.252 [2024-07-11 02:26:04.458508] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:14.252 [2024-07-11 02:26:04.458577] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.252 [2024-07-11 02:26:04.458594] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.252 [2024-07-11 02:26:04.458609] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.252 [2024-07-11 02:26:04.458622] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.252 [2024-07-11 02:26:04.458701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.252 [2024-07-11 02:26:04.458784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:14.252 [2024-07-11 02:26:04.458817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:14.252 [2024-07-11 02:26:04.590268] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.252 02:26:04 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:14.252 [2024-07-11 02:26:04.625671] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:14.252 NULL1 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1808796 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:14.252 02:26:04 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:14.252 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:14.252 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:14.510 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:14.510 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:14.510 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:14.510 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:14.510 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:14.510 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:14.510 02:26:04 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:14.510 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:14.510 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:14.510 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:14.510 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:14.510 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:14.510 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:14.510 02:26:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:14.510 02:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.510 02:26:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:14.767 02:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.767 02:26:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:14.767 02:26:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:14.767 02:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.767 02:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:15.025 02:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.025 02:26:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:15.025 02:26:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:15.025 02:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.025 02:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:15.283 02:26:05 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.283 02:26:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:15.283 02:26:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:15.283 02:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.283 02:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:15.850 02:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.850 02:26:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:15.850 02:26:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:15.850 02:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.850 02:26:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:16.108 02:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.108 02:26:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:16.108 02:26:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:16.108 02:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.108 02:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:16.366 02:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.366 02:26:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:16.366 02:26:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:16.366 02:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.366 02:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:16.625 02:26:06 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.625 02:26:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:16.625 02:26:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:16.625 02:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.625 02:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:16.883 02:26:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.883 02:26:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:16.883 02:26:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:16.883 02:26:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.883 02:26:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:17.449 02:26:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.449 02:26:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:17.449 02:26:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:17.449 02:26:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.449 02:26:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:17.707 02:26:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.707 02:26:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:17.707 02:26:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:17.708 02:26:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.708 02:26:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:17.965 02:26:08 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.965 02:26:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:17.965 02:26:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:17.965 02:26:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.965 02:26:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:18.223 02:26:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.223 02:26:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:18.223 02:26:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:18.223 02:26:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.223 02:26:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:18.482 02:26:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.482 02:26:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:18.482 02:26:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:18.482 02:26:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.482 02:26:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:19.048 02:26:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.048 02:26:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:19.048 02:26:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:19.048 02:26:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.048 02:26:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:19.306 02:26:09 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.306 02:26:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:19.306 02:26:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:19.306 02:26:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.306 02:26:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:19.564 02:26:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.564 02:26:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:19.564 02:26:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:19.564 02:26:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.564 02:26:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:19.823 02:26:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.823 02:26:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:19.823 02:26:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:19.823 02:26:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.823 02:26:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:20.080 02:26:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.081 02:26:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:20.081 02:26:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:20.081 02:26:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.081 02:26:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:20.647 02:26:10 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.647 02:26:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:20.647 02:26:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:20.647 02:26:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.647 02:26:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:20.905 02:26:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.905 02:26:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:20.905 02:26:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:20.905 02:26:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.905 02:26:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:21.163 02:26:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.163 02:26:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:21.163 02:26:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:21.163 02:26:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.163 02:26:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:21.421 02:26:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.421 02:26:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:21.421 02:26:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:21.421 02:26:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.421 02:26:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:21.679 02:26:12 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.679 02:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:21.679 02:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:21.679 02:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.679 02:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:22.247 02:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.247 02:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:22.247 02:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:22.247 02:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.247 02:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:22.505 02:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.505 02:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:22.505 02:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:22.505 02:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.505 02:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:22.763 02:26:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.763 02:26:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:22.763 02:26:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:22.763 02:26:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.763 02:26:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:23.021 02:26:13 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.021 02:26:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:23.021 02:26:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:23.021 02:26:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.021 02:26:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:23.279 02:26:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.280 02:26:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:23.280 02:26:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:23.280 02:26:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.280 02:26:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:23.845 02:26:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.845 02:26:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:23.845 02:26:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:23.845 02:26:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.845 02:26:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:24.103 02:26:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.103 02:26:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:24.103 02:26:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:24.103 02:26:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.103 02:26:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:24.361 02:26:14 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.361 02:26:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:24.361 02:26:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:24.361 02:26:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.361 02:26:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:24.361 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:24.619 02:26:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.619 02:26:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1808796 00:20:24.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1808796) - No such process 00:20:24.619 02:26:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1808796 00:20:24.619 02:26:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:24.619 02:26:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:20:24.619 02:26:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:20:24.619 02:26:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:24.619 02:26:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:20:24.619 02:26:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:24.619 02:26:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:20:24.619 02:26:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:24.619 02:26:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:24.619 rmmod nvme_tcp 00:20:24.619 rmmod nvme_fabrics 00:20:24.619 rmmod 
nvme_keyring 00:20:24.619 02:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:24.619 02:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:20:24.619 02:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:20:24.619 02:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1808758 ']' 00:20:24.619 02:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1808758 00:20:24.619 02:26:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1808758 ']' 00:20:24.619 02:26:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1808758 00:20:24.619 02:26:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:20:24.619 02:26:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:24.619 02:26:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1808758 00:20:24.879 02:26:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:24.879 02:26:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:24.879 02:26:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1808758' 00:20:24.879 killing process with pid 1808758 00:20:24.879 02:26:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1808758 00:20:24.879 02:26:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1808758 00:20:24.879 02:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:24.879 02:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:24.879 02:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:24.879 02:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:24.879 02:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:24.879 02:26:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.879 02:26:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.879 02:26:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.417 02:26:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:27.417 00:20:27.417 real 0m14.785s 00:20:27.417 user 0m38.147s 00:20:27.417 sys 0m5.405s 00:20:27.417 02:26:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:27.417 02:26:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:27.417 ************************************ 00:20:27.417 END TEST nvmf_connect_stress 00:20:27.417 ************************************ 00:20:27.417 02:26:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:27.417 02:26:17 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:20:27.417 02:26:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:27.417 02:26:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:27.417 02:26:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:27.417 ************************************ 00:20:27.417 START TEST nvmf_fused_ordering 00:20:27.417 ************************************ 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:20:27.417 * Looking for test storage... 
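The connect_stress teardown above repeatedly probes the stress-test PID with `kill -0` (which sends no signal, only reports whether the PID exists) until the process disappears, then reaps it with `wait`. A minimal standalone sketch of that polling pattern — the background `sleep` and the variable names are illustrative stand-ins, not the SPDK script itself:

```shell
#!/usr/bin/env bash
# Stand-in for the stress-test process: a short-lived background job.
sleep 1 &
stress_pid=$!

# kill -0 delivers no signal; its exit status just says whether the
# PID still exists. Loop until the process is gone.
while kill -0 "$stress_pid" 2>/dev/null; do
    sleep 0.2
done

# Reap the finished job; wait returns its exit status.
wait "$stress_pid"
status=$?
echo "process $stress_pid exited with status $status"
```

The same shape appears in the log as `kill -0 1808796` repeating until bash reports `kill: (1808796) - No such process`, after which `wait 1808796` collects the result.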
00:20:27.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:27.417 02:26:17 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:20:27.417 02:26:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:28.795 02:26:19 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.795 
02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:28.795 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:28.795 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.795 
02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:28.795 Found net devices under 0000:08:00.0: cvl_0_0 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.795 02:26:19 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:28.795 Found net devices under 0000:08:00.1: cvl_0_1 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
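The discovery loop above maps each matching PCI address to its kernel network interface by globbing the device's `net/` directory under sysfs, which is how the log arrives at `Found net devices under 0000:08:00.0: cvl_0_0`. A self-contained sketch of that mapping — the `list_pci_net_devs` helper name and the configurable sysfs root are illustrative; the real script walks `/sys/bus/pci/devices` directly:

```shell
#!/usr/bin/env bash
# Print "<pci-address>: <iface>" for every net interface exposed under
# a sysfs-style tree; pass /sys/bus/pci/devices on a real system.
list_pci_net_devs() {
    local root=$1 pci iface
    shopt -s nullglob           # empty globs expand to nothing
    for pci in "$root"/*; do
        # Each network device's PCI node carries a net/ subdirectory
        # named after its interface(s); non-NIC devices have none.
        for iface in "$pci"/net/*; do
            echo "${pci##*/}: ${iface##*/}"
        done
    done
    shopt -u nullglob
}
```

On a host with ice-driven E810 ports, calling this against the real sysfs tree would emit lines analogous to the log's `0000:08:00.0: cvl_0_0` and `0000:08:00.1: cvl_0_1`.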
00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:28.795 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:28.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:28.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:20:28.796 00:20:28.796 --- 10.0.0.2 ping statistics --- 00:20:28.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.796 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:28.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:28.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:20:28.796 00:20:28.796 --- 10.0.0.1 ping statistics --- 00:20:28.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.796 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:28.796 02:26:19 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1811208 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1811208 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1811208 ']' 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:28.796 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:29.054 [2024-07-11 02:26:19.261115] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:20:29.054 [2024-07-11 02:26:19.261207] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.054 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.054 [2024-07-11 02:26:19.335699] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.054 [2024-07-11 02:26:19.433182] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:29.054 [2024-07-11 02:26:19.433249] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:29.054 [2024-07-11 02:26:19.433273] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:29.054 [2024-07-11 02:26:19.433294] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:29.054 [2024-07-11 02:26:19.433314] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:29.054 [2024-07-11 02:26:19.433351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:20:29.311 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:29.311 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0
00:20:29.311 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:20:29.311 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable
00:20:29.311 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:20:29.311 02:26:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:29.311 02:26:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:20:29.311 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:29.311 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:20:29.311 [2024-07-11 02:26:19.572489] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:29.312 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:29.312 02:26:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:20:29.312 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:29.312 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:20:29.312 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:29.312 02:26:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:29.312 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:29.312 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:20:29.312 [2024-07-11 02:26:19.588662] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:29.312 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:29.312 02:26:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:20:29.312 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:29.312 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:20:29.312 NULL1
00:20:29.312 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:29.312 02:26:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:20:29.312 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:29.312 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:20:29.312 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:29.312 02:26:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
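The `rpc_cmd` calls in the trace provision the target end to end: create the TCP transport, create subsystem cnode1, attach a TCP listener on 10.0.0.2:4420, back it with a null bdev, and expose that bdev as a namespace. The same sequence collected into one place as a sketch (parameters copied verbatim from the trace; `RPC` defaults to a dry-run `echo`, and pointing it at SPDK's `scripts/rpc.py` against a live target would actually apply it):

```shell
# Dry run by default so the sequence can be printed without a running target;
# set RPC="scripts/rpc.py -s /var/tmp/spdk.sock" to apply it for real.
RPC=${RPC:-"echo rpc.py"}

$RPC nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, 8192-byte in-capsule data
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                           # allow any host; serial number; up to 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420                               # listen on 10.0.0.2:4420
$RPC bdev_null_create NULL1 1000 512                         # 1000 MB null bdev, 512-byte blocks
$RPC bdev_wait_for_examine                                   # let bdev examine callbacks finish
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1  # NULL1 becomes namespace 1
```

This matches the target-side notices in the log ("TCP Transport Init", "NVMe/TCP Target Listening on 10.0.0.2 port 4420", "Namespace ID: 1 size: 1GB").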
00:20:29.312 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:29.312 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:20:29.312 02:26:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:29.312 02:26:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:20:29.312 [2024-07-11 02:26:19.635761] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:20:29.312 [2024-07-11 02:26:19.635813] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1811277 ]
00:20:29.312 EAL: No free 2048 kB hugepages reported on node 1
00:20:29.876 Attached to nqn.2016-06.io.spdk:cnode1
00:20:29.876 Namespace ID: 1 size: 1GB
00:20:29.876 fused_ordering(0) … fused_ordering(25) [per-iteration counter lines elided]
00:20:29.876 fused_ordering(26) … fused_ordering(205)
00:20:30.134 fused_ordering(206) … fused_ordering(410)
00:20:30.698 fused_ordering(411) … fused_ordering(615)
00:20:31.261 fused_ordering(616) … fused_ordering(820)
00:20:31.828 fused_ordering(821) … fused_ordering(995)
[repetitive per-iteration fused_ordering counter lines elided; output continues]
00:20:31.829 fused_ordering(996) 00:20:31.829 fused_ordering(997) 00:20:31.829 fused_ordering(998) 00:20:31.829 fused_ordering(999) 00:20:31.829 fused_ordering(1000) 00:20:31.829 fused_ordering(1001) 00:20:31.829 fused_ordering(1002) 00:20:31.829 fused_ordering(1003) 00:20:31.829 fused_ordering(1004) 00:20:31.829 fused_ordering(1005) 00:20:31.829 fused_ordering(1006) 00:20:31.829 fused_ordering(1007) 00:20:31.829 fused_ordering(1008) 00:20:31.829 fused_ordering(1009) 00:20:31.829 fused_ordering(1010) 00:20:31.829 fused_ordering(1011) 00:20:31.829 fused_ordering(1012) 00:20:31.829 fused_ordering(1013) 00:20:31.829 fused_ordering(1014) 00:20:31.829 fused_ordering(1015) 00:20:31.829 fused_ordering(1016) 00:20:31.829 fused_ordering(1017) 00:20:31.829 fused_ordering(1018) 00:20:31.829 fused_ordering(1019) 00:20:31.829 fused_ordering(1020) 00:20:31.829 fused_ordering(1021) 00:20:31.829 fused_ordering(1022) 00:20:31.829 fused_ordering(1023) 00:20:31.829 02:26:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:20:31.829 02:26:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:20:31.829 02:26:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:31.829 02:26:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:20:31.829 02:26:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:31.829 02:26:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:20:31.829 02:26:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:31.829 02:26:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:31.829 rmmod nvme_tcp 00:20:32.089 rmmod nvme_fabrics 00:20:32.089 rmmod nvme_keyring 00:20:32.089 02:26:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:32.089 02:26:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 
00:20:32.089 02:26:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:20:32.089 02:26:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1811208 ']' 00:20:32.089 02:26:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1811208 00:20:32.089 02:26:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1811208 ']' 00:20:32.089 02:26:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1811208 00:20:32.089 02:26:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:20:32.089 02:26:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:32.089 02:26:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1811208 00:20:32.089 02:26:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:32.089 02:26:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:32.089 02:26:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1811208' 00:20:32.089 killing process with pid 1811208 00:20:32.089 02:26:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1811208 00:20:32.089 02:26:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1811208 00:20:32.089 02:26:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:32.089 02:26:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:32.089 02:26:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:32.089 02:26:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:32.089 02:26:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:32.089 02:26:22 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.089 02:26:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.089 02:26:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.624 02:26:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:34.624 00:20:34.624 real 0m7.195s 00:20:34.624 user 0m5.179s 00:20:34.624 sys 0m2.929s 00:20:34.624 02:26:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:34.624 02:26:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:34.624 ************************************ 00:20:34.624 END TEST nvmf_fused_ordering 00:20:34.624 ************************************ 00:20:34.624 02:26:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:34.624 02:26:24 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:20:34.624 02:26:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:34.624 02:26:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:34.624 02:26:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:34.624 ************************************ 00:20:34.624 START TEST nvmf_delete_subsystem 00:20:34.624 ************************************ 00:20:34.624 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:20:34.624 * Looking for test storage... 
00:20:34.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:34.624 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:34.624 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:20:34.624 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:34.624 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:34.624 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:34.624 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:34.624 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:34.624 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:34.624 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:34.624 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:34.624 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:34.624 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:34.625 02:26:24 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:20:34.625 02:26:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:36.003 02:26:26 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:36.003 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:36.003 Found 
0000:08:00.1 (0x8086 - 0x159b) 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:36.003 Found net devices under 0000:08:00.0: cvl_0_0 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:36.003 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:36.004 Found net devices under 0000:08:00.1: cvl_0_1 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:36.004 
02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:36.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:36.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:20:36.004 00:20:36.004 --- 10.0.0.2 ping statistics --- 00:20:36.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.004 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:36.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:36.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:20:36.004 00:20:36.004 --- 10.0.0.1 ping statistics --- 00:20:36.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.004 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:36.004 
02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1813017 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1813017 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1813017 ']' 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:36.004 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:36.004 [2024-07-11 02:26:26.391123] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:20:36.004 [2024-07-11 02:26:26.391222] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.262 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.262 [2024-07-11 02:26:26.456794] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:36.262 [2024-07-11 02:26:26.547189] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.262 [2024-07-11 02:26:26.547244] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.262 [2024-07-11 02:26:26.547270] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.262 [2024-07-11 02:26:26.547291] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.262 [2024-07-11 02:26:26.547310] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:36.262 [2024-07-11 02:26:26.547389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.262 [2024-07-11 02:26:26.547396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.262 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:36.262 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:20:36.262 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:36.262 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:36.262 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:36.542 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.542 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:36.542 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.542 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:36.542 [2024-07-11 02:26:26.700534] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.542 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.542 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:36.543 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.543 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:36.543 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.543 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:36.543 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.543 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:36.543 [2024-07-11 02:26:26.717148] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.543 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.543 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:36.543 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.543 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:36.543 NULL1 00:20:36.543 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.543 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:20:36.543 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.543 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:36.543 Delay0 00:20:36.543 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.543 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:36.543 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.543 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:36.543 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:20:36.543 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1813048 00:20:36.543 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:20:36.543 02:26:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:20:36.543 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.543 [2024-07-11 02:26:26.801649] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:38.461 02:26:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:38.461 02:26:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.461 02:26:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:20:38.720 [repeated "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines elided] 
00:20:38.720 [2024-07-11 02:26:28.972833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250880 is same with the state(5) to be set 
00:20:38.720 [2024-07-11 02:26:28.973841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8960000c00 is same with the state(5) to be set 
00:20:38.721 [2024-07-11 02:26:28.974369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1251050 is same with the state(5) to be set 
00:20:39.656 [2024-07-11 02:26:29.942972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125e990 is same with the state(5) to be set 
00:20:39.656 [2024-07-11 02:26:29.972485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1251360 is same with the state(5) to be set 
00:20:39.656 [2024-07-11 02:26:29.975570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f896000cfe0 is same with the state(5) to be set 
00:20:39.656 [2024-07-11 02:26:29.975820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f896000d2f0 is same with the state(5) to be set 
00:20:39.656 [2024-07-11 02:26:29.976405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1250d40 is same with the state(5) to be set 
00:20:39.656 Initializing NVMe Controllers 00:20:39.656 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:39.656 Controller IO queue size 128, less than required. 00:20:39.656 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:39.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:20:39.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:20:39.656 Initialization complete. Launching workers. 
00:20:39.656 ======================================================== 00:20:39.656 Latency(us) 00:20:39.656 Device Information : IOPS MiB/s Average min max 00:20:39.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.77 0.08 908975.65 1559.48 1012985.37 00:20:39.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.76 0.08 969911.84 596.60 2003011.29 00:20:39.656 ======================================================== 00:20:39.656 Total : 328.53 0.16 939535.79 596.60 2003011.29 00:20:39.656 00:20:39.656 [2024-07-11 02:26:29.977197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x125e990 (9): Bad file descriptor 00:20:39.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:20:39.656 02:26:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.656 02:26:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:20:39.656 02:26:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1813048 00:20:39.656 02:26:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:20:40.223 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:20:40.223 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1813048 00:20:40.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1813048) - No such process 00:20:40.223 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1813048 00:20:40.223 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:20:40.223 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1813048 00:20:40.223 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@636 -- # local arg=wait 00:20:40.223 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:40.224 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:20:40.224 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:40.224 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1813048 00:20:40.224 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:20:40.224 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:40.224 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:40.224 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:40.224 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:40.224 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.224 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:40.224 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.224 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:40.224 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.224 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:40.224 [2024-07-11 02:26:30.501010] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.224 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:20:40.224 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:40.224 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.224 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:40.224 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.224 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1813358 00:20:40.224 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:20:40.224 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:20:40.224 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1813358 00:20:40.224 02:26:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:40.224 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.224 [2024-07-11 02:26:30.564855] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:20:40.790 02:26:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:40.790 02:26:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1813358 00:20:40.790 02:26:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:41.355 02:26:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:41.356 02:26:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1813358 00:20:41.356 02:26:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:41.613 02:26:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:41.613 02:26:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1813358 00:20:41.613 02:26:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:42.180 02:26:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:42.180 02:26:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1813358 00:20:42.180 02:26:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:42.745 02:26:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:42.745 02:26:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1813358 00:20:42.745 02:26:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:43.308 02:26:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:43.308 02:26:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1813358 00:20:43.308 02:26:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:43.565 Initializing NVMe Controllers 00:20:43.565 Attached to NVMe over Fabrics 
controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:43.565 Controller IO queue size 128, less than required. 00:20:43.565 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:43.565 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:20:43.565 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:20:43.565 Initialization complete. Launching workers. 00:20:43.565 ======================================================== 00:20:43.565 Latency(us) 00:20:43.565 Device Information : IOPS MiB/s Average min max 00:20:43.565 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004407.00 1000175.02 1043118.46 00:20:43.565 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005826.44 1000520.27 1015305.25 00:20:43.565 ======================================================== 00:20:43.565 Total : 256.00 0.12 1005116.72 1000175.02 1043118.46 00:20:43.565 00:20:43.823 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:43.823 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1813358 00:20:43.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1813358) - No such process 00:20:43.823 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1813358 00:20:43.823 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:43.823 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:20:43.823 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:43.823 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:20:43.823 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:43.823 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:20:43.823 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:43.823 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:43.823 rmmod nvme_tcp 00:20:43.823 rmmod nvme_fabrics 00:20:43.823 rmmod nvme_keyring 00:20:43.823 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:43.823 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:20:43.823 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:20:43.823 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1813017 ']' 00:20:43.823 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1813017 00:20:43.823 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1813017 ']' 00:20:43.823 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1813017 00:20:43.823 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:20:43.823 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:43.823 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1813017 00:20:43.823 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:43.823 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:43.823 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1813017' 00:20:43.823 killing process with pid 1813017 00:20:43.823 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1813017 00:20:43.823 02:26:34 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 1813017 00:20:44.083 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:44.083 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:44.083 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:44.083 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:44.083 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:44.083 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.083 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:44.083 02:26:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.993 02:26:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:45.993 00:20:45.993 real 0m11.784s 00:20:45.993 user 0m27.591s 00:20:45.993 sys 0m2.761s 00:20:45.993 02:26:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:45.993 02:26:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:45.993 ************************************ 00:20:45.993 END TEST nvmf_delete_subsystem 00:20:45.993 ************************************ 00:20:45.993 02:26:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:45.993 02:26:36 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:20:45.993 02:26:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:45.993 02:26:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:45.993 02:26:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:45.993 ************************************ 
00:20:45.993 START TEST nvmf_ns_masking 00:20:45.993 ************************************ 00:20:45.993 02:26:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:20:46.253 * Looking for test storage... 00:20:46.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.253 02:26:36 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # 
'[' 0 -eq 1 ']' 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=ddfc9dff-5a5c-4ed0-858b-b2304ca520e9 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=adb0bd5a-092e-42b5-86a6-63195f22198c 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=01d074b6-007e-4b20-8dc2-c22d1da8ec22 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:20:46.253 02:26:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:20:47.631 
02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:47.631 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:47.631 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:47.631 Found net devices under 0000:08:00.0: cvl_0_0 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:47.631 Found net devices under 0000:08:00.1: cvl_0_1 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:47.631 02:26:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:47.631 02:26:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:47.631 02:26:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:47.631 02:26:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:47.631 02:26:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:47.890 02:26:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:47.890 02:26:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:47.890 02:26:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:47.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:47.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:20:47.890 00:20:47.890 --- 10.0.0.2 ping statistics --- 00:20:47.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.890 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:20:47.890 02:26:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:47.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:47.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:20:47.890 00:20:47.890 --- 10.0.0.1 ping statistics --- 00:20:47.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.890 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:20:47.890 02:26:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.890 02:26:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:20:47.890 02:26:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:47.890 02:26:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.890 02:26:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:47.890 02:26:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:47.890 02:26:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.890 02:26:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:47.890 02:26:38 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:47.890 02:26:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:20:47.890 02:26:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:47.890 02:26:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:47.890 02:26:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:47.890 02:26:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1815168 00:20:47.890 02:26:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:47.890 02:26:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1815168 00:20:47.890 02:26:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1815168 ']' 00:20:47.890 02:26:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.890 02:26:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:47.890 02:26:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.890 02:26:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:47.890 02:26:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:47.890 [2024-07-11 02:26:38.167829] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:20:47.890 [2024-07-11 02:26:38.167926] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.890 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.890 [2024-07-11 02:26:38.233152] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.148 [2024-07-11 02:26:38.322722] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.148 [2024-07-11 02:26:38.322790] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.148 [2024-07-11 02:26:38.322807] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.148 [2024-07-11 02:26:38.322821] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.148 [2024-07-11 02:26:38.322834] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:48.148 [2024-07-11 02:26:38.322872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.148 02:26:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:48.148 02:26:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:20:48.148 02:26:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:48.148 02:26:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:48.148 02:26:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:48.148 02:26:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.148 02:26:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:48.406 [2024-07-11 02:26:38.722705] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.406 02:26:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:20:48.406 02:26:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:20:48.406 02:26:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:48.664 Malloc1 00:20:48.664 02:26:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:49.230 Malloc2 00:20:49.230 02:26:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:49.488 02:26:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:20:49.746 02:26:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:50.004 [2024-07-11 02:26:40.239638] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.004 02:26:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:20:50.004 02:26:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 01d074b6-007e-4b20-8dc2-c22d1da8ec22 -a 10.0.0.2 -s 4420 -i 4 00:20:50.004 02:26:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:20:50.004 02:26:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:20:50.004 02:26:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:50.004 02:26:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:50.004 02:26:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:20:52.532 02:26:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:52.532 02:26:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:52.532 02:26:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:52.532 02:26:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:52.532 02:26:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:52.532 02:26:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:20:52.532 
02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:20:52.532 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:20:52.532 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:20:52.532 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:20:52.532 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:20:52.532 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:52.532 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:52.532 [ 0]:0x1 00:20:52.532 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:52.532 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:52.532 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6e132003f3444ff889016ef3f1a703b1 00:20:52.532 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6e132003f3444ff889016ef3f1a703b1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:52.532 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:20:52.532 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:20:52.532 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:52.532 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:52.532 [ 0]:0x1 00:20:52.532 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:52.532 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
00:20:52.532 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6e132003f3444ff889016ef3f1a703b1 00:20:52.532 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6e132003f3444ff889016ef3f1a703b1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:52.532 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:20:52.532 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:52.533 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:52.533 [ 1]:0x2 00:20:52.533 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:52.533 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:52.533 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=988acc87781c40859a98ab0cf30e45f7 00:20:52.533 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 988acc87781c40859a98ab0cf30e45f7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:52.533 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:20:52.533 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:52.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:52.533 02:26:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:53.098 02:26:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:20:53.357 02:26:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:20:53.357 02:26:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 01d074b6-007e-4b20-8dc2-c22d1da8ec22 -a 10.0.0.2 -s 4420 -i 4 00:20:53.357 02:26:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:20:53.357 02:26:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:20:53.357 02:26:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:53.357 02:26:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:20:53.357 02:26:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:20:53.357 02:26:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:20:55.921 02:26:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:55.921 02:26:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:55.921 02:26:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:55.921 02:26:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:55.921 02:26:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:55.921 02:26:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:20:55.921 02:26:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:20:55.921 02:26:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:20:55.921 02:26:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 
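The `waitforserial` helper traced above polls up to 15 times, sleeping between attempts, until `lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME` reports the expected number of attached devices. A generic Python sketch of that polling pattern (the `probe` callable stands in for the lsblk pipeline; all names here are mine, not part of the test suite):

```python
import time

def wait_for_count(probe, expected, retries=15, delay=0.0):
    """Poll probe() until it returns the expected count or retries run out.

    Mirrors the shell loop: (( i++ <= 15 )) around lsblk | grep -c SERIAL.
    """
    for _ in range(retries):
        if probe() == expected:
            return True
        time.sleep(delay)
    return False

# Simulated probe: the "device" appears on the third poll.
counts = iter([0, 0, 1])
print(wait_for_count(lambda: next(counts), expected=1))  # True
```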
00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:55.922 [ 0]:0x2 00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=988acc87781c40859a98ab0cf30e45f7 00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 988acc87781c40859a98ab0cf30e45f7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:55.922 02:26:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:55.922 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:20:55.922 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:55.922 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:55.922 [ 0]:0x1 00:20:55.922 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:55.922 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:55.922 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6e132003f3444ff889016ef3f1a703b1 00:20:55.922 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6e132003f3444ff889016ef3f1a703b1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:55.922 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:20:55.922 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:55.922 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 
0x2 00:20:55.922 [ 1]:0x2 00:20:55.922 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:55.922 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:55.922 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=988acc87781c40859a98ab0cf30e45f7 00:20:55.922 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 988acc87781c40859a98ab0cf30e45f7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:55.922 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:56.182 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:20:56.182 02:26:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:20:56.182 02:26:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:20:56.182 02:26:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:20:56.182 02:26:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:56.182 02:26:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:20:56.182 02:26:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:56.182 02:26:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:20:56.182 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:56.182 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:56.182 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:56.182 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:20:56.182 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:56.182 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:56.182 02:26:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:20:56.182 02:26:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:56.182 02:26:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:56.182 02:26:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:56.182 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:20:56.182 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:56.182 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:56.182 [ 0]:0x2 00:20:56.182 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:56.182 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:56.440 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=988acc87781c40859a98ab0cf30e45f7 00:20:56.440 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 988acc87781c40859a98ab0cf30e45f7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:56.440 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:20:56.440 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:56.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:56.440 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host 
nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:56.699 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:20:56.699 02:26:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 01d074b6-007e-4b20-8dc2-c22d1da8ec22 -a 10.0.0.2 -s 4420 -i 4 00:20:56.699 02:26:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:20:56.699 02:26:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:20:56.699 02:26:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:56.699 02:26:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:20:56.699 02:26:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:20:56.699 02:26:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:59.230 [ 0]:0x1 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6e132003f3444ff889016ef3f1a703b1 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6e132003f3444ff889016ef3f1a703b1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:59.230 [ 1]:0x2 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=988acc87781c40859a98ab0cf30e45f7 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 988acc87781c40859a98ab0cf30e45f7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:59.230 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:59.231 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:59.231 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:59.231 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:59.231 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:59.231 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:20:59.231 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:59.231 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:59.231 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:59.231 02:26:49 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:20:59.231 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:59.231 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:59.231 [ 0]:0x2 00:20:59.231 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:59.231 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:59.231 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=988acc87781c40859a98ab0cf30e45f7 00:20:59.231 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 988acc87781c40859a98ab0cf30e45f7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:59.231 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:59.231 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:20:59.231 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:59.231 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:59.231 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:59.231 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:59.231 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:59.231 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:59.231 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:59.231 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:59.231 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:20:59.231 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:59.489 [2024-07-11 02:26:49.909162] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:20:59.748 request: 00:20:59.748 { 00:20:59.748 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.748 "nsid": 2, 00:20:59.748 "host": "nqn.2016-06.io.spdk:host1", 00:20:59.748 "method": "nvmf_ns_remove_host", 00:20:59.748 "req_id": 1 00:20:59.748 } 00:20:59.748 Got JSON-RPC error response 00:20:59.748 response: 00:20:59.748 { 00:20:59.748 "code": -32602, 00:20:59.748 "message": "Invalid parameters" 00:20:59.748 } 00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 
00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:59.748 [ 0]:0x2 00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:20:59.748 02:26:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:59.748 02:26:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=988acc87781c40859a98ab0cf30e45f7 00:20:59.748 02:26:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 988acc87781c40859a98ab0cf30e45f7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:59.748 02:26:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:20:59.748 02:26:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:59.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:59.748 02:26:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1816438 00:20:59.748 02:26:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.748 02:26:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1816438 /var/tmp/host.sock 00:20:59.748 02:26:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:20:59.748 02:26:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1816438 ']' 00:20:59.748 02:26:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:20:59.748 02:26:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:59.748 02:26:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:59.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
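The repeated `ns_is_visible` checks traced in this log all follow one pattern: `nvme list-ns` confirms the namespace ID is listed, then `nvme id-ns ... -o json | jq -r .nguid` is compared against a 32-character all-zero string, an all-zero NGUID meaning the namespace is masked from this host. A minimal Python sketch of that predicate (function and constant names are mine; the JSON strings below use NGUID values taken from the log):

```python
import json

ZERO_NGUID = "0" * 32  # an all-zero NGUID means the namespace is hidden

def ns_is_visible(id_ns_json: str) -> bool:
    """Mirror of the shell check: visible iff the reported NGUID is non-zero."""
    nguid = json.loads(id_ns_json)["nguid"]
    return nguid != ZERO_NGUID

# NGUID values as reported in the traces above:
print(ns_is_visible('{"nguid": "988acc87781c40859a98ab0cf30e45f7"}'))  # True
print(ns_is_visible('{"nguid": "00000000000000000000000000000000"}'))  # False
```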
00:20:59.748 02:26:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:59.748 02:26:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:59.748 [2024-07-11 02:26:50.120474] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:20:59.748 [2024-07-11 02:26:50.120590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1816438 ] 00:20:59.748 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.007 [2024-07-11 02:26:50.181597] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.007 [2024-07-11 02:26:50.272374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.265 02:26:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:00.265 02:26:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:21:00.265 02:26:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:00.524 02:26:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:21:00.782 02:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid ddfc9dff-5a5c-4ed0-858b-b2304ca520e9 00:21:00.782 02:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:21:00.782 02:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g DDFC9DFF5A5C4ED0858BB2304CA520E9 -i 00:21:01.041 02:26:51 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # uuid2nguid adb0bd5a-092e-42b5-86a6-63195f22198c 00:21:01.041 02:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:21:01.041 02:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g ADB0BD5A092E42B586A663195F22198C -i 00:21:01.298 02:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:01.557 02:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:21:01.815 02:26:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:21:01.815 02:26:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:21:02.073 nvme0n1 00:21:02.073 02:26:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:21:02.073 02:26:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:21:02.640 nvme1n2 00:21:02.640 02:26:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # 
hostrpc bdev_get_bdevs 00:21:02.640 02:26:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:21:02.640 02:26:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:21:02.640 02:26:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:21:02.640 02:26:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:21:02.898 02:26:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:21:02.898 02:26:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:21:02.898 02:26:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:21:02.898 02:26:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:21:03.156 02:26:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ ddfc9dff-5a5c-4ed0-858b-b2304ca520e9 == \d\d\f\c\9\d\f\f\-\5\a\5\c\-\4\e\d\0\-\8\5\8\b\-\b\2\3\0\4\c\a\5\2\0\e\9 ]] 00:21:03.156 02:26:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:21:03.156 02:26:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:21:03.156 02:26:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:21:03.722 02:26:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ adb0bd5a-092e-42b5-86a6-63195f22198c == \a\d\b\0\b\d\5\a\-\0\9\2\e\-\4\2\b\5\-\8\6\a\6\-\6\3\1\9\5\f\2\2\1\9\8\c ]] 00:21:03.722 02:26:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1816438 00:21:03.722 02:26:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # 
'[' -z 1816438 ']' 00:21:03.722 02:26:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1816438 00:21:03.722 02:26:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:21:03.722 02:26:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:03.722 02:26:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1816438 00:21:03.722 02:26:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:03.722 02:26:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:03.722 02:26:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1816438' 00:21:03.722 killing process with pid 1816438 00:21:03.722 02:26:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1816438 00:21:03.722 02:26:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1816438 00:21:03.722 02:26:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:04.289 02:26:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:21:04.289 02:26:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:21:04.289 02:26:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:04.289 02:26:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:21:04.289 02:26:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:04.289 02:26:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:21:04.289 02:26:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:04.289 02:26:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:04.289 rmmod nvme_tcp 00:21:04.289 rmmod 
nvme_fabrics 00:21:04.289 rmmod nvme_keyring 00:21:04.289 02:26:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:04.289 02:26:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:21:04.289 02:26:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:21:04.289 02:26:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1815168 ']' 00:21:04.289 02:26:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1815168 00:21:04.289 02:26:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1815168 ']' 00:21:04.289 02:26:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1815168 00:21:04.289 02:26:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:21:04.289 02:26:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:04.289 02:26:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1815168 00:21:04.289 02:26:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:04.289 02:26:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:04.289 02:26:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1815168' 00:21:04.289 killing process with pid 1815168 00:21:04.289 02:26:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1815168 00:21:04.289 02:26:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1815168 00:21:04.548 02:26:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:04.548 02:26:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:04.548 02:26:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:04.548 02:26:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:21:04.548 02:26:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:04.548 02:26:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.548 02:26:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:04.548 02:26:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.450 02:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:06.450 00:21:06.450 real 0m20.364s 00:21:06.450 user 0m27.765s 00:21:06.450 sys 0m3.640s 00:21:06.450 02:26:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:06.450 02:26:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:06.450 ************************************ 00:21:06.450 END TEST nvmf_ns_masking 00:21:06.450 ************************************ 00:21:06.450 02:26:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:06.450 02:26:56 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:21:06.450 02:26:56 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:21:06.450 02:26:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:06.450 02:26:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:06.450 02:26:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:06.450 ************************************ 00:21:06.450 START TEST nvmf_nvme_cli 00:21:06.450 ************************************ 00:21:06.450 02:26:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:21:06.450 * Looking for test storage... 
00:21:06.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:06.450 02:26:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:06.708 02:26:56 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:06.708 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:06.709 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:06.709 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.709 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:06.709 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:06.709 02:26:56 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:06.709 02:26:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:06.709 02:26:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:06.709 02:26:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:21:06.709 02:26:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:21:06.709 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:06.709 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.709 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:06.709 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:06.709 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:06.709 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.709 02:26:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:06.709 02:26:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.709 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:06.709 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:06.709 02:26:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:21:06.709 02:26:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:08.609 02:26:58 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:08.609 02:26:58 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:21:08.609 Found 0000:08:00.0 (0x8086 - 0x159b) 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:21:08.609 Found 0000:08:00.1 (0x8086 - 0x159b) 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:08.609 02:26:58 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:21:08.609 Found net devices under 0000:08:00.0: cvl_0_0 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:21:08.609 Found net devices under 0000:08:00.1: cvl_0_1 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:08.609 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:08.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:08.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:21:08.610 00:21:08.610 --- 10.0.0.2 ping statistics --- 00:21:08.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.610 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:08.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:08.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:21:08.610 00:21:08.610 --- 10.0.0.1 ping statistics --- 00:21:08.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.610 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1818371 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1818371 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1818371 ']' 
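The nvmf_tcp_init sequence traced above moves one port of the e810 pair into a private network namespace and wires up a point-to-point 10.0.0.0/24 link, then verifies it with ping in both directions. Condensed into its privileged command sequence (root required; the cvl_0_0/cvl_0_1 names are the devices found earlier in this log), the plumbing is roughly:

```shell
# Condensed sketch of the nvmf_tcp_init plumbing (requires root;
# cvl_0_0/cvl_0_1 are the ice-driver ports enumerated above).
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# Target port lives in its own namespace; initiator port stays host-side.
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"

ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic in on the initiator-side interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity check: each side can reach the other.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Keeping the target in a namespace lets the initiator talk to it over real NIC hardware on a single machine, which is why the app is later launched as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt`.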
00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:08.610 [2024-07-11 02:26:58.722587] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:21:08.610 [2024-07-11 02:26:58.722676] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.610 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.610 [2024-07-11 02:26:58.787367] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:08.610 [2024-07-11 02:26:58.876927] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.610 [2024-07-11 02:26:58.876981] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.610 [2024-07-11 02:26:58.876998] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.610 [2024-07-11 02:26:58.877011] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.610 [2024-07-11 02:26:58.877023] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:08.610 [2024-07-11 02:26:58.877077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.610 [2024-07-11 02:26:58.879530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.610 [2024-07-11 02:26:58.879565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:08.610 [2024-07-11 02:26:58.879581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:08.610 02:26:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:08.610 02:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.610 02:26:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:08.610 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.610 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:08.610 [2024-07-11 02:26:59.021239] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:08.610 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.610 02:26:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:08.610 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.868 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:08.868 Malloc0 00:21:08.868 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.868 
02:26:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:08.868 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.868 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:08.868 Malloc1 00:21:08.868 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.868 02:26:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:21:08.868 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.868 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:08.868 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.868 02:26:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:08.868 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.868 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:08.868 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.868 02:26:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:08.868 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.868 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:08.868 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.868 02:26:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:08.868 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 
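The rpc_cmd calls traced here go over `/var/tmp/spdk.sock` to the target; outside the harness the same provisioning can be driven with SPDK's `scripts/rpc.py` client. A condensed equivalent of this test's sequence (the `RPC` path is an assumption about the checkout layout):

```shell
# Condensed sketch of the nvme_cli.sh provisioning RPCs, issued via
# SPDK's rpc.py instead of the harness's rpc_cmd wrapper.
RPC=scripts/rpc.py   # path inside an SPDK checkout (assumption)

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

The two malloc bdevs become the two namespaces the initiator later counts with waitforserial, and the discovery listener is what makes the `nvme discover` step below return two log entries.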
00:21:08.868 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:08.868 [2024-07-11 02:26:59.098394] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:08.868 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.868 02:26:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:08.868 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.868 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:08.868 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.868 02:26:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 4420 00:21:08.868 00:21:08.868 Discovery Log Number of Records 2, Generation counter 2 00:21:08.868 =====Discovery Log Entry 0====== 00:21:08.868 trtype: tcp 00:21:08.868 adrfam: ipv4 00:21:08.868 subtype: current discovery subsystem 00:21:08.868 treq: not required 00:21:08.868 portid: 0 00:21:08.868 trsvcid: 4420 00:21:08.868 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:08.868 traddr: 10.0.0.2 00:21:08.868 eflags: explicit discovery connections, duplicate discovery information 00:21:08.868 sectype: none 00:21:08.868 =====Discovery Log Entry 1====== 00:21:08.869 trtype: tcp 00:21:08.869 adrfam: ipv4 00:21:08.869 subtype: nvme subsystem 00:21:08.869 treq: not required 00:21:08.869 portid: 0 00:21:08.869 trsvcid: 4420 00:21:08.869 subnqn: nqn.2016-06.io.spdk:cnode1 00:21:08.869 traddr: 10.0.0.2 00:21:08.869 eflags: none 00:21:08.869 sectype: none 00:21:08.869 02:26:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:21:08.869 02:26:59 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:21:08.869 02:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:21:08.869 02:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:08.869 02:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:21:08.869 02:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:21:08.869 02:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:08.869 02:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:21:08.869 02:26:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:09.126 02:26:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:21:09.126 02:26:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:09.385 02:26:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:21:09.385 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:21:09.385 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:09.385 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:21:09.385 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:21:09.385 02:26:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:21:11.911 02:27:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:11.911 02:27:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:11.911 02:27:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 
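waitforserial, entered above via `sleep 2`, polls `lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME` until the expected number of namespaces appears, giving up after 16 tries. The loop shape, with a stub in place of lsblk (hypothetical, so the sketch runs anywhere):

```shell
# Shape of the waitforserial polling loop (autotest_common.sh).
# count_devices is a stub; the real check is
#   lsblk -l -o NAME,SERIAL | grep -c "$serial"
nvme_device_counter=2
count_devices() { echo 2; }   # stub: pretend both namespaces appeared

i=0
nvme_devices=0
while (( i++ <= 15 )); do
    nvme_devices=$(count_devices)
    if (( nvme_devices == nvme_device_counter )); then
        break
    fi
    sleep 2
done
if (( nvme_devices == nvme_device_counter )); then
    echo ready
fi
```

Polling with a retry cap is what lets the test tolerate the delay between `nvme connect` returning and the kernel exposing the block devices.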
00:21:11.911 02:27:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:21:11.911 02:27:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:11.911 02:27:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:21:11.911 02:27:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:21:11.911 02:27:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:21:11.911 02:27:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:11.911 02:27:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:21:11.911 02:27:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:21:11.911 02:27:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:11.911 02:27:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:21:11.911 02:27:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:11.911 02:27:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:21:11.911 02:27:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:21:11.911 02:27:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:11.911 02:27:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:21:11.911 02:27:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:21:11.911 02:27:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:11.911 02:27:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:21:11.911 /dev/nvme0n1 ]] 00:21:11.911 02:27:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:21:11.911 02:27:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:21:11.911 02:27:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 
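The get_nvme_devs runs traced here just screen `nvme list` output, keeping the first column of any line that names a `/dev/nvme*` node and discarding the header rows. With a canned sample standing in for live `nvme list` output (an assumption, so the filter can be shown without hardware):

```shell
# Filter sketch for get_nvme_devs (nvmf/common.sh): keep the first
# column of `nvme list` lines that name a /dev/nvme* node.
# The sample text is a stand-in for live `nvme list` output.
sample='Node                 SN
-------------------- --------------------
/dev/nvme0n2         SPDKISFASTANDAWESOME
/dev/nvme0n1         SPDKISFASTANDAWESOME'

devs=()
while read -r dev _; do
    if [[ $dev == /dev/nvme* ]]; then
        devs+=("$dev")
    fi
done <<< "$sample"

printf '%s\n' "${devs[@]}"
```

That is why the `Node` and `---------------------` lines fail the `[[ ... == /dev/nvme* ]]` test in the trace while `/dev/nvme0n2` and `/dev/nvme0n1` are echoed.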
00:21:11.911 02:27:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:11.911 02:27:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:11.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:11.911 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:11.911 rmmod nvme_tcp 00:21:11.911 rmmod nvme_fabrics 00:21:12.171 rmmod nvme_keyring 00:21:12.171 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:12.171 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:21:12.171 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:21:12.171 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1818371 ']' 00:21:12.171 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1818371 00:21:12.171 02:27:02 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@948 -- # '[' -z 1818371 ']' 00:21:12.171 02:27:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1818371 00:21:12.171 02:27:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:21:12.171 02:27:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:12.171 02:27:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1818371 00:21:12.171 02:27:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:12.171 02:27:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:12.171 02:27:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1818371' 00:21:12.171 killing process with pid 1818371 00:21:12.171 02:27:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1818371 00:21:12.171 02:27:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1818371 00:21:12.171 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:12.171 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:12.171 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:12.171 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:12.171 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:12.171 02:27:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.171 02:27:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:12.171 02:27:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.709 02:27:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:14.709 00:21:14.709 real 0m7.817s 00:21:14.709 user 
0m15.166s 00:21:14.709 sys 0m1.931s 00:21:14.709 02:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:14.709 02:27:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:14.709 ************************************ 00:21:14.709 END TEST nvmf_nvme_cli 00:21:14.709 ************************************ 00:21:14.709 02:27:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:14.710 02:27:04 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:21:14.710 02:27:04 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:21:14.710 02:27:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:14.710 02:27:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:14.710 02:27:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:14.710 ************************************ 00:21:14.710 START TEST nvmf_vfio_user 00:21:14.710 ************************************ 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:21:14.710 * Looking for test storage... 
00:21:14.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:14.710 
02:27:04 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1819130 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1819130' 00:21:14.710 Process pid: 1819130 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1819130 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1819130 ']' 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:14.710 02:27:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:21:14.710 [2024-07-11 02:27:04.812789] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:21:14.710 [2024-07-11 02:27:04.812894] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.710 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.710 [2024-07-11 02:27:04.874927] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:14.710 [2024-07-11 02:27:04.962607] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.710 [2024-07-11 02:27:04.962667] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.710 [2024-07-11 02:27:04.962683] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.710 [2024-07-11 02:27:04.962697] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.710 [2024-07-11 02:27:04.962717] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:14.710 [2024-07-11 02:27:04.962774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.710 [2024-07-11 02:27:04.962826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.710 [2024-07-11 02:27:04.962874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:14.710 [2024-07-11 02:27:04.962877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.710 02:27:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:14.710 02:27:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:21:14.710 02:27:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:21:16.079 02:27:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:21:16.079 02:27:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:21:16.079 02:27:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:21:16.079 02:27:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:21:16.079 02:27:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:21:16.079 02:27:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:16.338 Malloc1 00:21:16.338 02:27:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:21:16.595 02:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:21:17.162 02:27:07 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:21:17.420 02:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:21:17.420 02:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:21:17.420 02:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:21:17.677 Malloc2 00:21:17.677 02:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:21:17.934 02:27:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:21:18.192 02:27:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:21:18.449 02:27:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:21:18.449 02:27:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:21:18.449 02:27:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:21:18.449 02:27:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:21:18.449 02:27:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:21:18.449 02:27:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:21:18.449 [2024-07-11 02:27:08.840621] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:21:18.449 [2024-07-11 02:27:08.840675] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1819799 ] 00:21:18.449 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.709 [2024-07-11 02:27:08.883279] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:21:18.709 [2024-07-11 02:27:08.886116] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:21:18.709 [2024-07-11 02:27:08.886146] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f456aabd000 00:21:18.709 [2024-07-11 02:27:08.887111] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:18.709 [2024-07-11 02:27:08.888104] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:18.709 [2024-07-11 02:27:08.889104] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:18.709 [2024-07-11 02:27:08.890107] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:21:18.709 [2024-07-11 02:27:08.891111] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 
5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:21:18.709 [2024-07-11 02:27:08.892116] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:18.709 [2024-07-11 02:27:08.893121] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:21:18.709 [2024-07-11 02:27:08.894130] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:18.709 [2024-07-11 02:27:08.895137] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:21:18.709 [2024-07-11 02:27:08.895163] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f4569871000 00:21:18.709 [2024-07-11 02:27:08.896615] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:21:18.709 [2024-07-11 02:27:08.916165] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:21:18.709 [2024-07-11 02:27:08.916209] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:21:18.709 [2024-07-11 02:27:08.921289] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:21:18.709 [2024-07-11 02:27:08.921346] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:21:18.709 [2024-07-11 02:27:08.921446] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:21:18.709 [2024-07-11 02:27:08.921477] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:21:18.709 [2024-07-11 02:27:08.921489] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:21:18.709 [2024-07-11 02:27:08.922271] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:21:18.709 [2024-07-11 02:27:08.922291] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:21:18.709 [2024-07-11 02:27:08.922306] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:21:18.709 [2024-07-11 02:27:08.923273] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:21:18.709 [2024-07-11 02:27:08.923292] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:21:18.709 [2024-07-11 02:27:08.923307] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:21:18.709 [2024-07-11 02:27:08.924280] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:21:18.709 [2024-07-11 02:27:08.924300] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:18.709 [2024-07-11 02:27:08.925284] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:21:18.709 [2024-07-11 02:27:08.925304] 
nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:21:18.709 [2024-07-11 02:27:08.925315] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:21:18.709 [2024-07-11 02:27:08.925328] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:18.709 [2024-07-11 02:27:08.925440] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:21:18.709 [2024-07-11 02:27:08.925449] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:18.709 [2024-07-11 02:27:08.925459] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:21:18.709 [2024-07-11 02:27:08.926288] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:21:18.709 [2024-07-11 02:27:08.927291] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:21:18.709 [2024-07-11 02:27:08.928294] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:21:18.709 [2024-07-11 02:27:08.929292] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:18.709 [2024-07-11 02:27:08.929427] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:18.709 [2024-07-11 02:27:08.930310] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:21:18.709 [2024-07-11 02:27:08.930328] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:18.709 [2024-07-11 02:27:08.930339] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:21:18.709 [2024-07-11 02:27:08.930367] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:21:18.709 [2024-07-11 02:27:08.930387] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:21:18.709 [2024-07-11 02:27:08.930413] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:21:18.709 [2024-07-11 02:27:08.930424] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:21:18.709 [2024-07-11 02:27:08.930445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:21:18.709 [2024-07-11 02:27:08.930536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:21:18.709 [2024-07-11 02:27:08.930556] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:21:18.709 [2024-07-11 02:27:08.930570] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:21:18.709 [2024-07-11 02:27:08.930580] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 
00:21:18.709 [2024-07-11 02:27:08.930589] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:21:18.709 [2024-07-11 02:27:08.930598] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:21:18.709 [2024-07-11 02:27:08.930608] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:21:18.709 [2024-07-11 02:27:08.930617] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:21:18.709 [2024-07-11 02:27:08.930632] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:21:18.709 [2024-07-11 02:27:08.930649] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:21:18.709 [2024-07-11 02:27:08.930669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:21:18.709 [2024-07-11 02:27:08.930695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.709 [2024-07-11 02:27:08.930711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.710 [2024-07-11 02:27:08.930726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.710 [2024-07-11 02:27:08.930740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.710 [2024-07-11 02:27:08.930750] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:18.710 [2024-07-11 02:27:08.930767] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:18.710 [2024-07-11 02:27:08.930783] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:21:18.710 [2024-07-11 02:27:08.930797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:21:18.710 [2024-07-11 02:27:08.930809] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:21:18.710 [2024-07-11 02:27:08.930819] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:18.710 [2024-07-11 02:27:08.930832] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:21:18.710 [2024-07-11 02:27:08.930843] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:18.710 [2024-07-11 02:27:08.930858] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:21:18.710 [2024-07-11 02:27:08.930872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:21:18.710 [2024-07-11 02:27:08.930948] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 
00:21:18.710 [2024-07-11 02:27:08.930965] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:18.710 [2024-07-11 02:27:08.930980] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:21:18.710 [2024-07-11 02:27:08.930990] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:21:18.710 [2024-07-11 02:27:08.931002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:21:18.710 [2024-07-11 02:27:08.931018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:21:18.710 [2024-07-11 02:27:08.931036] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:21:18.710 [2024-07-11 02:27:08.931053] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:21:18.710 [2024-07-11 02:27:08.931069] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:21:18.710 [2024-07-11 02:27:08.931083] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:21:18.710 [2024-07-11 02:27:08.931092] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:21:18.710 [2024-07-11 02:27:08.931104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:21:18.710 [2024-07-11 02:27:08.931131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:21:18.710 
[2024-07-11 02:27:08.931153] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:18.710 [2024-07-11 02:27:08.931171] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:18.710 [2024-07-11 02:27:08.931185] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:21:18.710 [2024-07-11 02:27:08.931195] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:21:18.710 [2024-07-11 02:27:08.931205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:21:18.710 [2024-07-11 02:27:08.931221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:21:18.710 [2024-07-11 02:27:08.931238] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:18.710 [2024-07-11 02:27:08.931250] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:21:18.710 [2024-07-11 02:27:08.931266] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:21:18.710 [2024-07-11 02:27:08.931278] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:18.710 [2024-07-11 02:27:08.931287] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 
30000 ms) 00:21:18.710 [2024-07-11 02:27:08.931297] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:21:18.710 [2024-07-11 02:27:08.931310] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:21:18.710 [2024-07-11 02:27:08.931320] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:21:18.710 [2024-07-11 02:27:08.931330] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:21:18.710 [2024-07-11 02:27:08.931358] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:21:18.710 [2024-07-11 02:27:08.931378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:21:18.710 [2024-07-11 02:27:08.931400] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:21:18.710 [2024-07-11 02:27:08.931413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:21:18.710 [2024-07-11 02:27:08.931432] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:21:18.710 [2024-07-11 02:27:08.931446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:21:18.710 [2024-07-11 02:27:08.931465] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:21:18.710 [2024-07-11 02:27:08.931478] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:21:18.710 [2024-07-11 02:27:08.931502] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:21:18.710 [2024-07-11 02:27:08.931522] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:21:18.710 [2024-07-11 02:27:08.931531] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:21:18.710 [2024-07-11 02:27:08.931538] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:21:18.710 [2024-07-11 02:27:08.931549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:21:18.710 [2024-07-11 02:27:08.931563] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:21:18.710 [2024-07-11 02:27:08.931572] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:21:18.710 [2024-07-11 02:27:08.931583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:21:18.710 [2024-07-11 02:27:08.931596] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:21:18.710 [2024-07-11 02:27:08.931606] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:21:18.710 [2024-07-11 02:27:08.931617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:21:18.710 [2024-07-11 02:27:08.931631] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:21:18.710 [2024-07-11 02:27:08.931640] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:21:18.710 [2024-07-11 02:27:08.931650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:21:18.710 [2024-07-11 02:27:08.931664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:21:18.710 [2024-07-11 02:27:08.931686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:21:18.710 [2024-07-11 02:27:08.931705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:21:18.710 [2024-07-11 02:27:08.931722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:21:18.710 ===================================================== 00:21:18.710 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:21:18.710 ===================================================== 00:21:18.710 Controller Capabilities/Features 00:21:18.710 ================================ 00:21:18.710 Vendor ID: 4e58 00:21:18.710 Subsystem Vendor ID: 4e58 00:21:18.710 Serial Number: SPDK1 00:21:18.710 Model Number: SPDK bdev Controller 00:21:18.710 Firmware Version: 24.09 00:21:18.710 Recommended Arb Burst: 6 00:21:18.710 IEEE OUI Identifier: 8d 6b 50 00:21:18.710 Multi-path I/O 00:21:18.710 May have multiple subsystem ports: Yes 00:21:18.710 May have multiple controllers: Yes 00:21:18.710 Associated with SR-IOV VF: No 00:21:18.710 Max Data Transfer Size: 131072 00:21:18.710 Max Number of Namespaces: 32 00:21:18.710 Max Number of I/O Queues: 127 00:21:18.710 NVMe Specification Version (VS): 1.3 00:21:18.710 NVMe Specification Version (Identify): 1.3 00:21:18.710 Maximum Queue Entries: 256 00:21:18.710 
Contiguous Queues Required: Yes 00:21:18.710 Arbitration Mechanisms Supported 00:21:18.711 Weighted Round Robin: Not Supported 00:21:18.711 Vendor Specific: Not Supported 00:21:18.711 Reset Timeout: 15000 ms 00:21:18.711 Doorbell Stride: 4 bytes 00:21:18.711 NVM Subsystem Reset: Not Supported 00:21:18.711 Command Sets Supported 00:21:18.711 NVM Command Set: Supported 00:21:18.711 Boot Partition: Not Supported 00:21:18.711 Memory Page Size Minimum: 4096 bytes 00:21:18.711 Memory Page Size Maximum: 4096 bytes 00:21:18.711 Persistent Memory Region: Not Supported 00:21:18.711 Optional Asynchronous Events Supported 00:21:18.711 Namespace Attribute Notices: Supported 00:21:18.711 Firmware Activation Notices: Not Supported 00:21:18.711 ANA Change Notices: Not Supported 00:21:18.711 PLE Aggregate Log Change Notices: Not Supported 00:21:18.711 LBA Status Info Alert Notices: Not Supported 00:21:18.711 EGE Aggregate Log Change Notices: Not Supported 00:21:18.711 Normal NVM Subsystem Shutdown event: Not Supported 00:21:18.711 Zone Descriptor Change Notices: Not Supported 00:21:18.711 Discovery Log Change Notices: Not Supported 00:21:18.711 Controller Attributes 00:21:18.711 128-bit Host Identifier: Supported 00:21:18.711 Non-Operational Permissive Mode: Not Supported 00:21:18.711 NVM Sets: Not Supported 00:21:18.711 Read Recovery Levels: Not Supported 00:21:18.711 Endurance Groups: Not Supported 00:21:18.711 Predictable Latency Mode: Not Supported 00:21:18.711 Traffic Based Keep ALive: Not Supported 00:21:18.711 Namespace Granularity: Not Supported 00:21:18.711 SQ Associations: Not Supported 00:21:18.711 UUID List: Not Supported 00:21:18.711 Multi-Domain Subsystem: Not Supported 00:21:18.711 Fixed Capacity Management: Not Supported 00:21:18.711 Variable Capacity Management: Not Supported 00:21:18.711 Delete Endurance Group: Not Supported 00:21:18.711 Delete NVM Set: Not Supported 00:21:18.711 Extended LBA Formats Supported: Not Supported 00:21:18.711 Flexible Data Placement 
Supported: Not Supported 00:21:18.711 00:21:18.711 Controller Memory Buffer Support 00:21:18.711 ================================ 00:21:18.711 Supported: No 00:21:18.711 00:21:18.711 Persistent Memory Region Support 00:21:18.711 ================================ 00:21:18.711 Supported: No 00:21:18.711 00:21:18.711 Admin Command Set Attributes 00:21:18.711 ============================ 00:21:18.711 Security Send/Receive: Not Supported 00:21:18.711 Format NVM: Not Supported 00:21:18.711 Firmware Activate/Download: Not Supported 00:21:18.711 Namespace Management: Not Supported 00:21:18.711 Device Self-Test: Not Supported 00:21:18.711 Directives: Not Supported 00:21:18.711 NVMe-MI: Not Supported 00:21:18.711 Virtualization Management: Not Supported 00:21:18.711 Doorbell Buffer Config: Not Supported 00:21:18.711 Get LBA Status Capability: Not Supported 00:21:18.711 Command & Feature Lockdown Capability: Not Supported 00:21:18.711 Abort Command Limit: 4 00:21:18.711 Async Event Request Limit: 4 00:21:18.711 Number of Firmware Slots: N/A 00:21:18.711 Firmware Slot 1 Read-Only: N/A 00:21:18.711 Firmware Activation Without Reset: N/A 00:21:18.711 Multiple Update Detection Support: N/A 00:21:18.711 Firmware Update Granularity: No Information Provided 00:21:18.711 Per-Namespace SMART Log: No 00:21:18.711 Asymmetric Namespace Access Log Page: Not Supported 00:21:18.711 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:21:18.711 Command Effects Log Page: Supported 00:21:18.711 Get Log Page Extended Data: Supported 00:21:18.711 Telemetry Log Pages: Not Supported 00:21:18.711 Persistent Event Log Pages: Not Supported 00:21:18.711 Supported Log Pages Log Page: May Support 00:21:18.711 Commands Supported & Effects Log Page: Not Supported 00:21:18.711 Feature Identifiers & Effects Log Page:May Support 00:21:18.711 NVMe-MI Commands & Effects Log Page: May Support 00:21:18.711 Data Area 4 for Telemetry Log: Not Supported 00:21:18.711 Error Log Page Entries Supported: 128 00:21:18.711 Keep 
Alive: Supported 00:21:18.711 Keep Alive Granularity: 10000 ms 00:21:18.711 00:21:18.711 NVM Command Set Attributes 00:21:18.711 ========================== 00:21:18.711 Submission Queue Entry Size 00:21:18.711 Max: 64 00:21:18.711 Min: 64 00:21:18.711 Completion Queue Entry Size 00:21:18.711 Max: 16 00:21:18.711 Min: 16 00:21:18.711 Number of Namespaces: 32 00:21:18.711 Compare Command: Supported 00:21:18.711 Write Uncorrectable Command: Not Supported 00:21:18.711 Dataset Management Command: Supported 00:21:18.711 Write Zeroes Command: Supported 00:21:18.711 Set Features Save Field: Not Supported 00:21:18.711 Reservations: Not Supported 00:21:18.711 Timestamp: Not Supported 00:21:18.711 Copy: Supported 00:21:18.711 Volatile Write Cache: Present 00:21:18.711 Atomic Write Unit (Normal): 1 00:21:18.711 Atomic Write Unit (PFail): 1 00:21:18.711 Atomic Compare & Write Unit: 1 00:21:18.711 Fused Compare & Write: Supported 00:21:18.711 Scatter-Gather List 00:21:18.711 SGL Command Set: Supported (Dword aligned) 00:21:18.711 SGL Keyed: Not Supported 00:21:18.711 SGL Bit Bucket Descriptor: Not Supported 00:21:18.711 SGL Metadata Pointer: Not Supported 00:21:18.711 Oversized SGL: Not Supported 00:21:18.711 SGL Metadata Address: Not Supported 00:21:18.711 SGL Offset: Not Supported 00:21:18.711 Transport SGL Data Block: Not Supported 00:21:18.711 Replay Protected Memory Block: Not Supported 00:21:18.711 00:21:18.711 Firmware Slot Information 00:21:18.711 ========================= 00:21:18.711 Active slot: 1 00:21:18.711 Slot 1 Firmware Revision: 24.09 00:21:18.711 00:21:18.711 00:21:18.711 Commands Supported and Effects 00:21:18.711 ============================== 00:21:18.711 Admin Commands 00:21:18.711 -------------- 00:21:18.711 Get Log Page (02h): Supported 00:21:18.711 Identify (06h): Supported 00:21:18.711 Abort (08h): Supported 00:21:18.711 Set Features (09h): Supported 00:21:18.711 Get Features (0Ah): Supported 00:21:18.711 Asynchronous Event Request (0Ch): Supported 
00:21:18.711 Keep Alive (18h): Supported 00:21:18.711 I/O Commands 00:21:18.711 ------------ 00:21:18.711 Flush (00h): Supported LBA-Change 00:21:18.711 Write (01h): Supported LBA-Change 00:21:18.711 Read (02h): Supported 00:21:18.711 Compare (05h): Supported 00:21:18.711 Write Zeroes (08h): Supported LBA-Change 00:21:18.711 Dataset Management (09h): Supported LBA-Change 00:21:18.711 Copy (19h): Supported LBA-Change 00:21:18.711 00:21:18.711 Error Log 00:21:18.711 ========= 00:21:18.711 00:21:18.711 Arbitration 00:21:18.711 =========== 00:21:18.711 Arbitration Burst: 1 00:21:18.711 00:21:18.711 Power Management 00:21:18.711 ================ 00:21:18.711 Number of Power States: 1 00:21:18.711 Current Power State: Power State #0 00:21:18.711 Power State #0: 00:21:18.711 Max Power: 0.00 W 00:21:18.711 Non-Operational State: Operational 00:21:18.711 Entry Latency: Not Reported 00:21:18.711 Exit Latency: Not Reported 00:21:18.711 Relative Read Throughput: 0 00:21:18.711 Relative Read Latency: 0 00:21:18.711 Relative Write Throughput: 0 00:21:18.711 Relative Write Latency: 0 00:21:18.711 Idle Power: Not Reported 00:21:18.711 Active Power: Not Reported 00:21:18.711 Non-Operational Permissive Mode: Not Supported 00:21:18.711 00:21:18.711 Health Information 00:21:18.711 ================== 00:21:18.711 Critical Warnings: 00:21:18.711 Available Spare Space: OK 00:21:18.711 Temperature: OK 00:21:18.711 Device Reliability: OK 00:21:18.711 Read Only: No 00:21:18.711 Volatile Memory Backup: OK 00:21:18.711 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:18.711 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:18.711 Available Spare: 0% 00:21:18.711 Available Sp[2024-07-11 02:27:08.931866] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:21:18.711 [2024-07-11 02:27:08.931884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 
00:21:18.711 [2024-07-11 02:27:08.931933] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:21:18.711 [2024-07-11 02:27:08.931952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.711 [2024-07-11 02:27:08.931965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.711 [2024-07-11 02:27:08.931977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.711 [2024-07-11 02:27:08.931989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.711 [2024-07-11 02:27:08.935523] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:21:18.711 [2024-07-11 02:27:08.935547] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:21:18.712 [2024-07-11 02:27:08.936341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:18.712 [2024-07-11 02:27:08.936442] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:21:18.712 [2024-07-11 02:27:08.936458] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:21:18.712 [2024-07-11 02:27:08.937349] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:21:18.712 [2024-07-11 02:27:08.937373] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete 
in 0 milliseconds 00:21:18.712 [2024-07-11 02:27:08.937446] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:21:18.712 [2024-07-11 02:27:08.939389] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:21:18.712 are Threshold: 0% 00:21:18.712 Life Percentage Used: 0% 00:21:18.712 Data Units Read: 0 00:21:18.712 Data Units Written: 0 00:21:18.712 Host Read Commands: 0 00:21:18.712 Host Write Commands: 0 00:21:18.712 Controller Busy Time: 0 minutes 00:21:18.712 Power Cycles: 0 00:21:18.712 Power On Hours: 0 hours 00:21:18.712 Unsafe Shutdowns: 0 00:21:18.712 Unrecoverable Media Errors: 0 00:21:18.712 Lifetime Error Log Entries: 0 00:21:18.712 Warning Temperature Time: 0 minutes 00:21:18.712 Critical Temperature Time: 0 minutes 00:21:18.712 00:21:18.712 Number of Queues 00:21:18.712 ================ 00:21:18.712 Number of I/O Submission Queues: 127 00:21:18.712 Number of I/O Completion Queues: 127 00:21:18.712 00:21:18.712 Active Namespaces 00:21:18.712 ================= 00:21:18.712 Namespace ID:1 00:21:18.712 Error Recovery Timeout: Unlimited 00:21:18.712 Command Set Identifier: NVM (00h) 00:21:18.712 Deallocate: Supported 00:21:18.712 Deallocated/Unwritten Error: Not Supported 00:21:18.712 Deallocated Read Value: Unknown 00:21:18.712 Deallocate in Write Zeroes: Not Supported 00:21:18.712 Deallocated Guard Field: 0xFFFF 00:21:18.712 Flush: Supported 00:21:18.712 Reservation: Supported 00:21:18.712 Namespace Sharing Capabilities: Multiple Controllers 00:21:18.712 Size (in LBAs): 131072 (0GiB) 00:21:18.712 Capacity (in LBAs): 131072 (0GiB) 00:21:18.712 Utilization (in LBAs): 131072 (0GiB) 00:21:18.712 NGUID: 1636D2B0D3E04F90BDC1F1882697AB47 00:21:18.712 UUID: 1636d2b0-d3e0-4f90-bdc1-f1882697ab47 00:21:18.712 Thin Provisioning: Not Supported 00:21:18.712 Per-NS Atomic Units: Yes 00:21:18.712 Atomic Boundary Size (Normal): 0 
00:21:18.712 Atomic Boundary Size (PFail): 0 00:21:18.712 Atomic Boundary Offset: 0 00:21:18.712 Maximum Single Source Range Length: 65535 00:21:18.712 Maximum Copy Length: 65535 00:21:18.712 Maximum Source Range Count: 1 00:21:18.712 NGUID/EUI64 Never Reused: No 00:21:18.712 Namespace Write Protected: No 00:21:18.712 Number of LBA Formats: 1 00:21:18.712 Current LBA Format: LBA Format #00 00:21:18.712 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:18.712 00:21:18.712 02:27:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:21:18.712 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.970 [2024-07-11 02:27:09.162470] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:24.334 Initializing NVMe Controllers 00:21:24.334 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:21:24.334 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:21:24.334 Initialization complete. Launching workers. 
00:21:24.334 ======================================================== 00:21:24.334 Latency(us) 00:21:24.334 Device Information : IOPS MiB/s Average min max 00:21:24.334 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 24071.40 94.03 5320.24 1490.58 11546.87 00:21:24.334 ======================================================== 00:21:24.334 Total : 24071.40 94.03 5320.24 1490.58 11546.87 00:21:24.334 00:21:24.334 [2024-07-11 02:27:14.185638] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:24.334 02:27:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:21:24.334 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.334 [2024-07-11 02:27:14.408815] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:29.616 Initializing NVMe Controllers 00:21:29.616 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:21:29.617 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:21:29.617 Initialization complete. Launching workers. 
00:21:29.617 ======================================================== 00:21:29.617 Latency(us) 00:21:29.617 Device Information : IOPS MiB/s Average min max 00:21:29.617 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16023.13 62.59 7993.52 6959.55 15326.61 00:21:29.617 ======================================================== 00:21:29.617 Total : 16023.13 62.59 7993.52 6959.55 15326.61 00:21:29.617 00:21:29.617 [2024-07-11 02:27:19.450676] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:29.617 02:27:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:21:29.617 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.617 [2024-07-11 02:27:19.671821] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:34.887 [2024-07-11 02:27:24.756897] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:34.888 Initializing NVMe Controllers 00:21:34.888 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:21:34.888 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:21:34.888 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:21:34.888 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:21:34.888 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:21:34.888 Initialization complete. Launching workers. 
00:21:34.888 Starting thread on core 2 00:21:34.888 Starting thread on core 3 00:21:34.888 Starting thread on core 1 00:21:34.888 02:27:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:21:34.888 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.888 [2024-07-11 02:27:25.048017] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:38.180 [2024-07-11 02:27:28.104312] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:38.180 Initializing NVMe Controllers 00:21:38.180 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:21:38.180 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:21:38.180 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:21:38.180 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:21:38.180 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:21:38.180 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:21:38.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:21:38.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:21:38.180 Initialization complete. Launching workers. 
00:21:38.180 Starting thread on core 1 with urgent priority queue 00:21:38.180 Starting thread on core 2 with urgent priority queue 00:21:38.180 Starting thread on core 3 with urgent priority queue 00:21:38.180 Starting thread on core 0 with urgent priority queue 00:21:38.180 SPDK bdev Controller (SPDK1 ) core 0: 6729.67 IO/s 14.86 secs/100000 ios 00:21:38.180 SPDK bdev Controller (SPDK1 ) core 1: 7200.33 IO/s 13.89 secs/100000 ios 00:21:38.180 SPDK bdev Controller (SPDK1 ) core 2: 6964.67 IO/s 14.36 secs/100000 ios 00:21:38.180 SPDK bdev Controller (SPDK1 ) core 3: 8334.00 IO/s 12.00 secs/100000 ios 00:21:38.180 ======================================================== 00:21:38.180 00:21:38.180 02:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:21:38.180 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.180 [2024-07-11 02:27:28.387128] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:38.180 Initializing NVMe Controllers 00:21:38.180 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:21:38.180 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:21:38.180 Namespace ID: 1 size: 0GB 00:21:38.180 Initialization complete. 00:21:38.180 INFO: using host memory buffer for IO 00:21:38.180 Hello world! 
00:21:38.180 [2024-07-11 02:27:28.423907] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:38.180 02:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:21:38.180 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.439 [2024-07-11 02:27:28.699027] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:39.374 Initializing NVMe Controllers 00:21:39.374 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:21:39.374 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:21:39.374 Initialization complete. Launching workers. 00:21:39.374 submit (in ns) avg, min, max = 6513.1, 4506.7, 4002973.3 00:21:39.374 complete (in ns) avg, min, max = 30389.0, 2643.0, 6995985.2 00:21:39.374 00:21:39.374 Submit histogram 00:21:39.374 ================ 00:21:39.374 Range in us Cumulative Count 00:21:39.374 4.504 - 4.527: 0.1035% ( 12) 00:21:39.374 4.527 - 4.551: 0.7329% ( 73) 00:21:39.374 4.551 - 4.575: 2.7766% ( 237) 00:21:39.374 4.575 - 4.599: 6.1654% ( 393) 00:21:39.374 4.599 - 4.622: 9.8301% ( 425) 00:21:39.374 4.622 - 4.646: 13.0206% ( 370) 00:21:39.374 4.646 - 4.670: 15.2367% ( 257) 00:21:39.374 4.670 - 4.693: 16.6853% ( 168) 00:21:39.374 4.693 - 4.717: 17.8063% ( 130) 00:21:39.374 4.717 - 4.741: 19.4619% ( 192) 00:21:39.374 4.741 - 4.764: 22.7904% ( 386) 00:21:39.374 4.764 - 4.788: 28.8264% ( 700) 00:21:39.374 4.788 - 4.812: 34.6555% ( 676) 00:21:39.374 4.812 - 4.836: 39.3291% ( 542) 00:21:39.374 4.836 - 4.859: 41.6401% ( 268) 00:21:39.374 4.859 - 4.883: 44.0890% ( 284) 00:21:39.374 4.883 - 4.907: 45.4773% ( 161) 00:21:39.374 4.907 - 4.930: 46.4258% ( 110) 00:21:39.374 4.930 - 4.954: 47.5037% ( 125) 00:21:39.374 4.954 - 4.978: 49.5818% ( 241) 
00:21:39.374 4.978 - 5.001: 50.9787% ( 162) 00:21:39.374 5.001 - 5.025: 52.7033% ( 200) 00:21:39.374 5.025 - 5.049: 53.5914% ( 103) 00:21:39.374 5.049 - 5.073: 54.5658% ( 113) 00:21:39.374 5.073 - 5.096: 54.8504% ( 33) 00:21:39.374 5.096 - 5.120: 55.0746% ( 26) 00:21:39.374 5.120 - 5.144: 55.2902% ( 25) 00:21:39.374 5.144 - 5.167: 56.1783% ( 103) 00:21:39.374 5.167 - 5.191: 58.1443% ( 228) 00:21:39.374 5.191 - 5.215: 61.4900% ( 388) 00:21:39.374 5.215 - 5.239: 64.5339% ( 353) 00:21:39.374 5.239 - 5.262: 66.4913% ( 227) 00:21:39.374 5.262 - 5.286: 67.8882% ( 162) 00:21:39.374 5.286 - 5.310: 69.3197% ( 166) 00:21:39.374 5.310 - 5.333: 70.5010% ( 137) 00:21:39.374 5.333 - 5.357: 73.0620% ( 297) 00:21:39.374 5.357 - 5.381: 75.3557% ( 266) 00:21:39.374 5.381 - 5.404: 77.0372% ( 195) 00:21:39.374 5.404 - 5.428: 78.2875% ( 145) 00:21:39.374 5.428 - 5.452: 79.6327% ( 156) 00:21:39.374 5.452 - 5.476: 80.5898% ( 111) 00:21:39.374 5.476 - 5.499: 80.9606% ( 43) 00:21:39.374 5.499 - 5.523: 81.2451% ( 33) 00:21:39.374 5.523 - 5.547: 81.4952% ( 29) 00:21:39.374 5.547 - 5.570: 83.0818% ( 184) 00:21:39.374 5.570 - 5.594: 86.7638% ( 427) 00:21:39.374 5.594 - 5.618: 90.7562% ( 463) 00:21:39.374 5.618 - 5.641: 92.7740% ( 234) 00:21:39.374 5.641 - 5.665: 94.1537% ( 160) 00:21:39.374 5.665 - 5.689: 95.5247% ( 159) 00:21:39.374 5.689 - 5.713: 95.9300% ( 47) 00:21:39.374 5.713 - 5.736: 96.1024% ( 20) 00:21:39.374 5.736 - 5.760: 96.2577% ( 18) 00:21:39.374 5.760 - 5.784: 96.3094% ( 6) 00:21:39.374 5.784 - 5.807: 96.3698% ( 7) 00:21:39.374 5.807 - 5.831: 96.4818% ( 13) 00:21:39.374 5.831 - 5.855: 96.6543% ( 20) 00:21:39.374 5.855 - 5.879: 96.7147% ( 7) 00:21:39.374 5.879 - 5.902: 96.8699% ( 18) 00:21:39.374 5.902 - 5.926: 96.9475% ( 9) 00:21:39.374 5.926 - 5.950: 97.0251% ( 9) 00:21:39.374 5.950 - 5.973: 97.1544% ( 15) 00:21:39.374 5.973 - 5.997: 97.1803% ( 3) 00:21:39.374 5.997 - 6.021: 97.2148% ( 4) 00:21:39.374 6.021 - 6.044: 97.2665% ( 6) 00:21:39.374 6.044 - 6.068: 97.2924% ( 3) 
00:21:39.374 6.068 - 6.116: 97.3528% ( 7) 00:21:39.374 6.116 - 6.163: 97.3959% ( 5) 00:21:39.374 6.163 - 6.210: 97.4390% ( 5) 00:21:39.374 6.258 - 6.305: 97.4735% ( 4) 00:21:39.374 6.305 - 6.353: 97.6028% ( 15) 00:21:39.374 6.353 - 6.400: 97.7235% ( 14) 00:21:39.374 6.400 - 6.447: 97.7322% ( 1) 00:21:39.374 6.447 - 6.495: 97.7925% ( 7) 00:21:39.374 6.495 - 6.542: 97.8874% ( 11) 00:21:39.374 6.542 - 6.590: 97.9477% ( 7) 00:21:39.374 6.590 - 6.637: 97.9564% ( 1) 00:21:39.374 6.637 - 6.684: 97.9736% ( 2) 00:21:39.374 6.684 - 6.732: 98.0167% ( 5) 00:21:39.374 6.732 - 6.779: 98.0685% ( 6) 00:21:39.374 6.779 - 6.827: 98.0943% ( 3) 00:21:39.374 6.827 - 6.874: 98.1374% ( 5) 00:21:39.374 6.874 - 6.921: 98.3961% ( 30) 00:21:39.374 6.921 - 6.969: 98.6634% ( 31) 00:21:39.374 6.969 - 7.016: 98.9308% ( 31) 00:21:39.374 7.016 - 7.064: 99.0429% ( 13) 00:21:39.374 7.064 - 7.111: 99.0773% ( 4) 00:21:39.374 7.111 - 7.159: 99.1032% ( 3) 00:21:39.374 7.159 - 7.206: 99.1118% ( 1) 00:21:39.374 7.253 - 7.301: 99.1291% ( 2) 00:21:39.374 7.301 - 7.348: 99.1377% ( 1) 00:21:39.374 7.633 - 7.680: 99.1636% ( 3) 00:21:39.374 7.680 - 7.727: 99.1722% ( 1) 00:21:39.374 7.727 - 7.775: 99.1808% ( 1) 00:21:39.374 7.775 - 7.822: 99.1894% ( 1) 00:21:39.374 7.822 - 7.870: 99.2067% ( 2) 00:21:39.374 7.917 - 7.964: 99.2239% ( 2) 00:21:39.374 7.964 - 8.012: 99.2326% ( 1) 00:21:39.374 8.107 - 8.154: 99.2412% ( 1) 00:21:39.374 8.154 - 8.201: 99.2498% ( 1) 00:21:39.374 8.201 - 8.249: 99.2671% ( 2) 00:21:39.374 8.249 - 8.296: 99.2843% ( 2) 00:21:39.374 8.296 - 8.344: 99.2929% ( 1) 00:21:39.374 8.391 - 8.439: 99.3188% ( 3) 00:21:39.374 8.486 - 8.533: 99.3274% ( 1) 00:21:39.374 8.533 - 8.581: 99.3360% ( 1) 00:21:39.374 8.581 - 8.628: 99.3447% ( 1) 00:21:39.374 8.676 - 8.723: 99.3533% ( 1) 00:21:39.374 8.723 - 8.770: 99.3619% ( 1) 00:21:39.374 8.770 - 8.818: 99.3705% ( 1) 00:21:39.375 8.960 - 9.007: 99.3878% ( 2) 00:21:39.375 9.007 - 9.055: 99.4050% ( 2) 00:21:39.375 9.244 - 9.292: 99.4309% ( 3) 00:21:39.375 9.292 
- 9.339: 99.4395% ( 1) 00:21:39.375 9.339 - 9.387: 99.4481% ( 1) 00:21:39.375 9.387 - 9.434: 99.4568% ( 1) 00:21:39.375 9.434 - 9.481: 99.4826% ( 3) 00:21:39.375 9.481 - 9.529: 99.5085% ( 3) 00:21:39.375 9.671 - 9.719: 99.5171% ( 1) 00:21:39.375 9.861 - 9.908: 99.5257% ( 1) 00:21:39.375 9.908 - 9.956: 99.5344% ( 1) 00:21:39.375 10.050 - 10.098: 99.5430% ( 1) 00:21:39.375 10.145 - 10.193: 99.5516% ( 1) 00:21:39.375 10.193 - 10.240: 99.5689% ( 2) 00:21:39.375 10.240 - 10.287: 99.5775% ( 1) 00:21:39.375 10.287 - 10.335: 99.5861% ( 1) 00:21:39.375 10.430 - 10.477: 99.5947% ( 1) 00:21:39.375 10.572 - 10.619: 99.6120% ( 2) 00:21:39.375 10.761 - 10.809: 99.6206% ( 1) 00:21:39.375 10.904 - 10.951: 99.6378% ( 2) 00:21:39.375 10.999 - 11.046: 99.6465% ( 1) 00:21:39.375 11.283 - 11.330: 99.6551% ( 1) 00:21:39.375 11.330 - 11.378: 99.6637% ( 1) 00:21:39.375 11.425 - 11.473: 99.6723% ( 1) 00:21:39.375 11.473 - 11.520: 99.6810% ( 1) 00:21:39.375 11.567 - 11.615: 99.6896% ( 1) 00:21:39.375 11.662 - 11.710: 99.6982% ( 1) 00:21:39.375 11.757 - 11.804: 99.7154% ( 2) 00:21:39.375 11.804 - 11.852: 99.7241% ( 1) 00:21:39.375 11.994 - 12.041: 99.7327% ( 1) 00:21:39.375 12.041 - 12.089: 99.7413% ( 1) 00:21:39.375 12.136 - 12.231: 99.7499% ( 1) 00:21:39.375 12.326 - 12.421: 99.7586% ( 1) 00:21:39.375 12.421 - 12.516: 99.7758% ( 2) 00:21:39.375 12.610 - 12.705: 99.7930% ( 2) 00:21:39.375 12.705 - 12.800: 99.8017% ( 1) 00:21:39.375 13.464 - 13.559: 99.8275% ( 3) 00:21:39.375 13.559 - 13.653: 99.8362% ( 1) 00:21:39.375 13.653 - 13.748: 99.8448% ( 1) 00:21:39.375 13.748 - 13.843: 99.8534% ( 1) 00:21:39.375 13.843 - 13.938: 99.9051% ( 6) 00:21:39.375 13.938 - 14.033: 99.9310% ( 3) 00:21:39.375 14.033 - 14.127: 99.9396% ( 1) 00:21:39.375 14.222 - 14.317: 99.9483% ( 1) 00:21:39.375 14.317 - 14.412: 99.9569% ( 1) 00:21:39.375 14.696 - 14.791: 99.9655% ( 1) 00:21:39.375 3980.705 - 4004.978: 100.0000% ( 4) 00:21:39.375 00:21:39.375 Complete histogram 00:21:39.375 ================== 00:21:39.375 
Range in us Cumulative Count 00:21:39.375 2.643 - 2.655: 4.8892% ( 567) 00:21:39.375 2.655 - 2.667: 39.6137% ( 4027) 00:21:39.375 2.667 - 2.679: 52.6343% ( 1510) 00:21:39.375 2.679 - 2.690: 58.9980% ( 738) 00:21:39.375 2.690 - 2.702: 79.6672% ( 2397) 00:21:39.375 2.702 - 2.714: 91.0408% ( 1319) 00:21:39.375 2.714 - 2.726: 95.4902% ( 516) 00:21:39.375 [2024-07-11 02:27:29.720291] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:39.375 2.726 - 2.738: 97.3441% ( 215) 00:21:39.375 2.738 - 2.750: 98.1288% ( 91) 00:21:39.375 2.750 - 2.761: 98.3616% ( 27) 00:21:39.375 2.761 - 2.773: 98.4479% ( 10) 00:21:39.375 2.773 - 2.785: 98.4996% ( 6) 00:21:39.375 2.785 - 2.797: 98.5600% ( 7) 00:21:39.375 2.797 - 2.809: 98.5772% ( 2) 00:21:39.375 2.809 - 2.821: 98.6031% ( 3) 00:21:39.375 2.821 - 2.833: 98.6290% ( 3) 00:21:39.375 2.833 - 2.844: 98.6462% ( 2) 00:21:39.375 2.844 - 2.856: 98.6548% ( 1) 00:21:39.375 2.868 - 2.880: 98.6634% ( 1) 00:21:39.375 2.904 - 2.916: 98.6807% ( 2) 00:21:39.375 2.916 - 2.927: 98.6893% ( 1) 00:21:39.375 2.987 - 2.999: 98.6979% ( 1) 00:21:39.375 3.105 - 3.129: 98.7066% ( 1) 00:21:39.375 3.176 - 3.200: 98.7152% ( 1) 00:21:39.375 3.200 - 3.224: 98.7238% ( 1) 00:21:39.375 3.224 - 3.247: 98.7583% ( 4) 00:21:39.375 3.247 - 3.271: 98.7842% ( 3) 00:21:39.375 3.271 - 3.295: 98.8100% ( 3) 00:21:39.375 3.295 - 3.319: 98.8618% ( 6) 00:21:39.375 3.319 - 3.342: 98.8876% ( 3) 00:21:39.375 3.342 - 3.366: 98.9480% ( 7) 00:21:39.375 3.366 - 3.390: 98.9825% ( 4) 00:21:39.375 3.390 - 3.413: 98.9997% ( 2) 00:21:39.375 3.508 - 3.532: 99.0084% ( 1) 00:21:39.375 3.603 - 3.627: 99.0170% ( 1) 00:21:39.375 4.053 - 4.077: 99.0256% ( 1) 00:21:39.375 4.338 - 4.361: 99.0342% ( 1) 00:21:39.375 4.385 - 4.409: 99.0429% ( 1) 00:21:39.375 4.527 - 4.551: 99.0601% ( 2) 00:21:39.375 5.618 - 5.641: 99.0687% ( 1) 00:21:39.375 5.831 - 5.855: 99.0773% ( 1) 00:21:39.375 5.855 - 5.879: 99.0860% ( 1) 00:21:39.375 6.068 - 6.116: 99.0946% ( 
1) 00:21:39.375 6.116 - 6.163: 99.1205% ( 3) 00:21:39.375 6.163 - 6.210: 99.1377% ( 2) 00:21:39.375 6.210 - 6.258: 99.1550% ( 2) 00:21:39.375 6.353 - 6.400: 99.1636% ( 1) 00:21:39.375 6.400 - 6.447: 99.1981% ( 4) 00:21:39.375 6.590 - 6.637: 99.2153% ( 2) 00:21:39.375 6.637 - 6.684: 99.2239% ( 1) 00:21:39.375 6.732 - 6.779: 99.2326% ( 1) 00:21:39.375 7.016 - 7.064: 99.2412% ( 1) 00:21:39.375 7.159 - 7.206: 99.2498% ( 1) 00:21:39.375 7.301 - 7.348: 99.2757% ( 3) 00:21:39.375 7.633 - 7.680: 99.2843% ( 1) 00:21:39.375 7.822 - 7.870: 99.2929% ( 1) 00:21:39.375 7.870 - 7.917: 99.3015% ( 1) 00:21:39.375 13.179 - 13.274: 99.3102% ( 1) 00:21:39.375 3009.801 - 3021.938: 99.3188% ( 1) 00:21:39.375 3203.982 - 3228.255: 99.3274% ( 1) 00:21:39.375 3980.705 - 4004.978: 99.7844% ( 53) 00:21:39.375 4004.978 - 4029.250: 99.9914% ( 24) 00:21:39.375 6990.507 - 7039.052: 100.0000% ( 1) 00:21:39.375 00:21:39.375 02:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:21:39.375 02:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:21:39.375 02:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:21:39.375 02:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:21:39.375 02:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:21:39.939 [ 00:21:39.939 { 00:21:39.939 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:39.939 "subtype": "Discovery", 00:21:39.939 "listen_addresses": [], 00:21:39.939 "allow_any_host": true, 00:21:39.939 "hosts": [] 00:21:39.939 }, 00:21:39.939 { 00:21:39.939 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:21:39.939 "subtype": "NVMe", 00:21:39.939 "listen_addresses": [ 00:21:39.939 { 00:21:39.939 "trtype": 
"VFIOUSER", 00:21:39.939 "adrfam": "IPv4", 00:21:39.939 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:21:39.939 "trsvcid": "0" 00:21:39.939 } 00:21:39.939 ], 00:21:39.939 "allow_any_host": true, 00:21:39.939 "hosts": [], 00:21:39.939 "serial_number": "SPDK1", 00:21:39.939 "model_number": "SPDK bdev Controller", 00:21:39.939 "max_namespaces": 32, 00:21:39.939 "min_cntlid": 1, 00:21:39.939 "max_cntlid": 65519, 00:21:39.939 "namespaces": [ 00:21:39.939 { 00:21:39.939 "nsid": 1, 00:21:39.939 "bdev_name": "Malloc1", 00:21:39.939 "name": "Malloc1", 00:21:39.939 "nguid": "1636D2B0D3E04F90BDC1F1882697AB47", 00:21:39.939 "uuid": "1636d2b0-d3e0-4f90-bdc1-f1882697ab47" 00:21:39.939 } 00:21:39.939 ] 00:21:39.939 }, 00:21:39.939 { 00:21:39.939 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:21:39.939 "subtype": "NVMe", 00:21:39.939 "listen_addresses": [ 00:21:39.939 { 00:21:39.939 "trtype": "VFIOUSER", 00:21:39.940 "adrfam": "IPv4", 00:21:39.940 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:21:39.940 "trsvcid": "0" 00:21:39.940 } 00:21:39.940 ], 00:21:39.940 "allow_any_host": true, 00:21:39.940 "hosts": [], 00:21:39.940 "serial_number": "SPDK2", 00:21:39.940 "model_number": "SPDK bdev Controller", 00:21:39.940 "max_namespaces": 32, 00:21:39.940 "min_cntlid": 1, 00:21:39.940 "max_cntlid": 65519, 00:21:39.940 "namespaces": [ 00:21:39.940 { 00:21:39.940 "nsid": 1, 00:21:39.940 "bdev_name": "Malloc2", 00:21:39.940 "name": "Malloc2", 00:21:39.940 "nguid": "C0464627CA7A4B37861604C5EF602DEE", 00:21:39.940 "uuid": "c0464627-ca7a-4b37-8616-04c5ef602dee" 00:21:39.940 } 00:21:39.940 ] 00:21:39.940 } 00:21:39.940 ] 00:21:39.940 02:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:39.940 02:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1821976 00:21:39.940 02:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' 
trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:21:39.940 02:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:21:39.940 02:27:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:21:39.940 02:27:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:39.940 02:27:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:39.940 02:27:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:21:39.940 02:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:21:39.940 02:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:21:39.940 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.940 [2024-07-11 02:27:30.236126] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:40.196 Malloc3 00:21:40.196 02:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:21:40.454 [2024-07-11 02:27:30.692646] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:40.454 02:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:21:40.454 Asynchronous Event Request test 00:21:40.454 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:21:40.454 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:21:40.454 Registering asynchronous event callbacks... 
00:21:40.454 Starting namespace attribute notice tests for all controllers... 00:21:40.454 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:40.454 aer_cb - Changed Namespace 00:21:40.454 Cleaning up... 00:21:40.712 [ 00:21:40.712 { 00:21:40.712 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:40.712 "subtype": "Discovery", 00:21:40.712 "listen_addresses": [], 00:21:40.712 "allow_any_host": true, 00:21:40.712 "hosts": [] 00:21:40.712 }, 00:21:40.712 { 00:21:40.712 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:21:40.712 "subtype": "NVMe", 00:21:40.712 "listen_addresses": [ 00:21:40.712 { 00:21:40.712 "trtype": "VFIOUSER", 00:21:40.712 "adrfam": "IPv4", 00:21:40.712 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:21:40.712 "trsvcid": "0" 00:21:40.712 } 00:21:40.712 ], 00:21:40.712 "allow_any_host": true, 00:21:40.712 "hosts": [], 00:21:40.712 "serial_number": "SPDK1", 00:21:40.712 "model_number": "SPDK bdev Controller", 00:21:40.712 "max_namespaces": 32, 00:21:40.712 "min_cntlid": 1, 00:21:40.712 "max_cntlid": 65519, 00:21:40.712 "namespaces": [ 00:21:40.712 { 00:21:40.712 "nsid": 1, 00:21:40.712 "bdev_name": "Malloc1", 00:21:40.712 "name": "Malloc1", 00:21:40.712 "nguid": "1636D2B0D3E04F90BDC1F1882697AB47", 00:21:40.712 "uuid": "1636d2b0-d3e0-4f90-bdc1-f1882697ab47" 00:21:40.712 }, 00:21:40.712 { 00:21:40.712 "nsid": 2, 00:21:40.712 "bdev_name": "Malloc3", 00:21:40.712 "name": "Malloc3", 00:21:40.712 "nguid": "BDBA23641FCF40069A9E5FD3D0715222", 00:21:40.712 "uuid": "bdba2364-1fcf-4006-9a9e-5fd3d0715222" 00:21:40.712 } 00:21:40.712 ] 00:21:40.712 }, 00:21:40.712 { 00:21:40.712 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:21:40.712 "subtype": "NVMe", 00:21:40.712 "listen_addresses": [ 00:21:40.712 { 00:21:40.712 "trtype": "VFIOUSER", 00:21:40.712 "adrfam": "IPv4", 00:21:40.712 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:21:40.712 "trsvcid": "0" 00:21:40.712 } 00:21:40.712 ], 00:21:40.712 
"allow_any_host": true, 00:21:40.712 "hosts": [], 00:21:40.713 "serial_number": "SPDK2", 00:21:40.713 "model_number": "SPDK bdev Controller", 00:21:40.713 "max_namespaces": 32, 00:21:40.713 "min_cntlid": 1, 00:21:40.713 "max_cntlid": 65519, 00:21:40.713 "namespaces": [ 00:21:40.713 { 00:21:40.713 "nsid": 1, 00:21:40.713 "bdev_name": "Malloc2", 00:21:40.713 "name": "Malloc2", 00:21:40.713 "nguid": "C0464627CA7A4B37861604C5EF602DEE", 00:21:40.713 "uuid": "c0464627-ca7a-4b37-8616-04c5ef602dee" 00:21:40.713 } 00:21:40.713 ] 00:21:40.713 } 00:21:40.713 ] 00:21:40.713 02:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1821976 00:21:40.713 02:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:21:40.713 02:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:21:40.713 02:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:21:40.713 02:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:21:40.713 [2024-07-11 02:27:31.024034] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:21:40.713 [2024-07-11 02:27:31.024085] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1822073 ] 00:21:40.713 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.713 [2024-07-11 02:27:31.067352] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:21:40.713 [2024-07-11 02:27:31.076865] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:21:40.713 [2024-07-11 02:27:31.076898] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe15e6aa000 00:21:40.713 [2024-07-11 02:27:31.077863] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:40.713 [2024-07-11 02:27:31.078868] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:40.713 [2024-07-11 02:27:31.079878] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:40.713 [2024-07-11 02:27:31.080883] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:21:40.713 [2024-07-11 02:27:31.081890] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:21:40.713 [2024-07-11 02:27:31.082891] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:40.713 [2024-07-11 02:27:31.083900] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:21:40.713 [2024-07-11 02:27:31.084909] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:40.713 [2024-07-11 02:27:31.085914] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:21:40.713 [2024-07-11 02:27:31.085938] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe15d45e000 00:21:40.713 [2024-07-11 02:27:31.087416] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:21:40.713 [2024-07-11 02:27:31.107452] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:21:40.713 [2024-07-11 02:27:31.107490] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:21:40.713 [2024-07-11 02:27:31.109588] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:21:40.713 [2024-07-11 02:27:31.109647] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:21:40.713 [2024-07-11 02:27:31.109744] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:21:40.713 [2024-07-11 02:27:31.109773] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:21:40.713 [2024-07-11 02:27:31.109785] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:21:40.713 [2024-07-11 02:27:31.110592] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:21:40.713 [2024-07-11 02:27:31.110622] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:21:40.713 [2024-07-11 02:27:31.110636] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:21:40.713 [2024-07-11 02:27:31.111594] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:21:40.713 [2024-07-11 02:27:31.111615] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:21:40.713 [2024-07-11 02:27:31.111630] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:21:40.713 [2024-07-11 02:27:31.112601] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:21:40.713 [2024-07-11 02:27:31.112624] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:40.713 [2024-07-11 02:27:31.113617] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:21:40.713 [2024-07-11 02:27:31.113639] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:21:40.713 [2024-07-11 02:27:31.113650] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:21:40.713 [2024-07-11 02:27:31.113663] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:40.713 [2024-07-11 02:27:31.113775] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:21:40.713 [2024-07-11 02:27:31.113785] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:40.713 [2024-07-11 02:27:31.113795] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:21:40.713 [2024-07-11 02:27:31.114617] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:21:40.713 [2024-07-11 02:27:31.115624] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:21:40.713 [2024-07-11 02:27:31.116628] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:21:40.713 [2024-07-11 02:27:31.117620] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:40.713 [2024-07-11 02:27:31.117696] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:40.713 [2024-07-11 02:27:31.118633] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:21:40.713 [2024-07-11 02:27:31.118654] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:40.713 [2024-07-11 02:27:31.118665] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:21:40.713 [2024-07-11 02:27:31.118694] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:21:40.713 [2024-07-11 02:27:31.118714] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:21:40.713 [2024-07-11 02:27:31.118737] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:21:40.713 [2024-07-11 02:27:31.118748] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:21:40.713 [2024-07-11 02:27:31.118768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:21:40.713 [2024-07-11 02:27:31.127549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:21:40.713 [2024-07-11 02:27:31.127574] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:21:40.713 [2024-07-11 02:27:31.127589] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:21:40.713 [2024-07-11 02:27:31.127599] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:21:40.713 [2024-07-11 02:27:31.127608] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:21:40.714 [2024-07-11 02:27:31.127617] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:21:40.714 [2024-07-11 
02:27:31.127627] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:21:40.714 [2024-07-11 02:27:31.127636] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:21:40.714 [2024-07-11 02:27:31.127651] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:21:40.714 [2024-07-11 02:27:31.127669] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:21:40.972 [2024-07-11 02:27:31.135525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:21:40.972 [2024-07-11 02:27:31.135563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.972 [2024-07-11 02:27:31.135580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.972 [2024-07-11 02:27:31.135594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.972 [2024-07-11 02:27:31.135608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:21:40.972 [2024-07-11 02:27:31.135619] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:21:40.972 [2024-07-11 02:27:31.135635] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:40.972 [2024-07-11 
02:27:31.135656] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:21:40.972 [2024-07-11 02:27:31.143524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:21:40.972 [2024-07-11 02:27:31.143544] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:21:40.972 [2024-07-11 02:27:31.143565] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:40.972 [2024-07-11 02:27:31.143579] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:21:40.972 [2024-07-11 02:27:31.143591] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:21:40.972 [2024-07-11 02:27:31.143608] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:21:40.972 [2024-07-11 02:27:31.151709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:21:40.972 [2024-07-11 02:27:31.151791] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:21:40.972 [2024-07-11 02:27:31.151808] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:21:40.972 [2024-07-11 02:27:31.151824] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:21:40.972 [2024-07-11 
02:27:31.151834] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:21:40.972 [2024-07-11 02:27:31.151846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:21:40.972 [2024-07-11 02:27:31.159523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:21:40.972 [2024-07-11 02:27:31.159549] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:21:40.972 [2024-07-11 02:27:31.159569] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:21:40.972 [2024-07-11 02:27:31.159586] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:21:40.972 [2024-07-11 02:27:31.159601] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:21:40.972 [2024-07-11 02:27:31.159610] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:21:40.972 [2024-07-11 02:27:31.159622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:21:40.972 [2024-07-11 02:27:31.167522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:21:40.972 [2024-07-11 02:27:31.167552] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:40.972 [2024-07-11 02:27:31.167571] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id 
descriptors (timeout 30000 ms) 00:21:40.972 [2024-07-11 02:27:31.167586] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:21:40.972 [2024-07-11 02:27:31.167596] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:21:40.972 [2024-07-11 02:27:31.167607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:21:40.972 [2024-07-11 02:27:31.175524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:21:40.972 [2024-07-11 02:27:31.175546] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:40.972 [2024-07-11 02:27:31.175561] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:21:40.972 [2024-07-11 02:27:31.175577] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:21:40.972 [2024-07-11 02:27:31.175590] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:21:40.972 [2024-07-11 02:27:31.175600] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:40.972 [2024-07-11 02:27:31.175610] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:21:40.972 [2024-07-11 02:27:31.175620] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - 
Host ID 00:21:40.973 [2024-07-11 02:27:31.175629] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:21:40.973 [2024-07-11 02:27:31.175640] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:21:40.973 [2024-07-11 02:27:31.175667] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:21:40.973 [2024-07-11 02:27:31.183522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:21:40.973 [2024-07-11 02:27:31.183550] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:21:40.973 [2024-07-11 02:27:31.191520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:21:40.973 [2024-07-11 02:27:31.191548] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:21:40.973 [2024-07-11 02:27:31.199520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:21:40.973 [2024-07-11 02:27:31.199547] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:21:40.973 [2024-07-11 02:27:31.207528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:21:40.973 [2024-07-11 02:27:31.207573] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:21:40.973 [2024-07-11 02:27:31.207585] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:21:40.973 [2024-07-11 
02:27:31.207593] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:21:40.973 [2024-07-11 02:27:31.207600] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:21:40.973 [2024-07-11 02:27:31.207612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:21:40.973 [2024-07-11 02:27:31.207625] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:21:40.973 [2024-07-11 02:27:31.207635] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:21:40.973 [2024-07-11 02:27:31.207645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:21:40.973 [2024-07-11 02:27:31.207662] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:21:40.973 [2024-07-11 02:27:31.207672] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:21:40.973 [2024-07-11 02:27:31.207683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:21:40.973 [2024-07-11 02:27:31.207697] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:21:40.973 [2024-07-11 02:27:31.207706] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:21:40.973 [2024-07-11 02:27:31.207716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:21:40.973 [2024-07-11 02:27:31.215543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 
cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:21:40.973 [2024-07-11 02:27:31.215572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:21:40.973 [2024-07-11 02:27:31.215593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:21:40.973 [2024-07-11 02:27:31.215606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:21:40.973 ===================================================== 00:21:40.973 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:21:40.973 ===================================================== 00:21:40.973 Controller Capabilities/Features 00:21:40.973 ================================ 00:21:40.973 Vendor ID: 4e58 00:21:40.973 Subsystem Vendor ID: 4e58 00:21:40.973 Serial Number: SPDK2 00:21:40.973 Model Number: SPDK bdev Controller 00:21:40.973 Firmware Version: 24.09 00:21:40.973 Recommended Arb Burst: 6 00:21:40.973 IEEE OUI Identifier: 8d 6b 50 00:21:40.973 Multi-path I/O 00:21:40.973 May have multiple subsystem ports: Yes 00:21:40.973 May have multiple controllers: Yes 00:21:40.973 Associated with SR-IOV VF: No 00:21:40.973 Max Data Transfer Size: 131072 00:21:40.973 Max Number of Namespaces: 32 00:21:40.973 Max Number of I/O Queues: 127 00:21:40.973 NVMe Specification Version (VS): 1.3 00:21:40.973 NVMe Specification Version (Identify): 1.3 00:21:40.973 Maximum Queue Entries: 256 00:21:40.973 Contiguous Queues Required: Yes 00:21:40.973 Arbitration Mechanisms Supported 00:21:40.973 Weighted Round Robin: Not Supported 00:21:40.973 Vendor Specific: Not Supported 00:21:40.973 Reset Timeout: 15000 ms 00:21:40.973 Doorbell Stride: 4 bytes 00:21:40.973 NVM Subsystem Reset: Not Supported 00:21:40.973 Command Sets Supported 00:21:40.973 NVM Command Set: Supported 00:21:40.973 Boot Partition: Not Supported 
00:21:40.973 Memory Page Size Minimum: 4096 bytes 00:21:40.973 Memory Page Size Maximum: 4096 bytes 00:21:40.973 Persistent Memory Region: Not Supported 00:21:40.973 Optional Asynchronous Events Supported 00:21:40.973 Namespace Attribute Notices: Supported 00:21:40.973 Firmware Activation Notices: Not Supported 00:21:40.973 ANA Change Notices: Not Supported 00:21:40.973 PLE Aggregate Log Change Notices: Not Supported 00:21:40.973 LBA Status Info Alert Notices: Not Supported 00:21:40.973 EGE Aggregate Log Change Notices: Not Supported 00:21:40.973 Normal NVM Subsystem Shutdown event: Not Supported 00:21:40.973 Zone Descriptor Change Notices: Not Supported 00:21:40.973 Discovery Log Change Notices: Not Supported 00:21:40.973 Controller Attributes 00:21:40.973 128-bit Host Identifier: Supported 00:21:40.973 Non-Operational Permissive Mode: Not Supported 00:21:40.973 NVM Sets: Not Supported 00:21:40.973 Read Recovery Levels: Not Supported 00:21:40.973 Endurance Groups: Not Supported 00:21:40.973 Predictable Latency Mode: Not Supported 00:21:40.973 Traffic Based Keep ALive: Not Supported 00:21:40.973 Namespace Granularity: Not Supported 00:21:40.973 SQ Associations: Not Supported 00:21:40.973 UUID List: Not Supported 00:21:40.973 Multi-Domain Subsystem: Not Supported 00:21:40.973 Fixed Capacity Management: Not Supported 00:21:40.973 Variable Capacity Management: Not Supported 00:21:40.973 Delete Endurance Group: Not Supported 00:21:40.973 Delete NVM Set: Not Supported 00:21:40.973 Extended LBA Formats Supported: Not Supported 00:21:40.973 Flexible Data Placement Supported: Not Supported 00:21:40.973 00:21:40.973 Controller Memory Buffer Support 00:21:40.973 ================================ 00:21:40.973 Supported: No 00:21:40.973 00:21:40.973 Persistent Memory Region Support 00:21:40.973 ================================ 00:21:40.973 Supported: No 00:21:40.973 00:21:40.973 Admin Command Set Attributes 00:21:40.973 ============================ 00:21:40.973 Security 
Send/Receive: Not Supported 00:21:40.973 Format NVM: Not Supported 00:21:40.973 Firmware Activate/Download: Not Supported 00:21:40.973 Namespace Management: Not Supported 00:21:40.973 Device Self-Test: Not Supported 00:21:40.973 Directives: Not Supported 00:21:40.973 NVMe-MI: Not Supported 00:21:40.973 Virtualization Management: Not Supported 00:21:40.973 Doorbell Buffer Config: Not Supported 00:21:40.973 Get LBA Status Capability: Not Supported 00:21:40.973 Command & Feature Lockdown Capability: Not Supported 00:21:40.973 Abort Command Limit: 4 00:21:40.973 Async Event Request Limit: 4 00:21:40.973 Number of Firmware Slots: N/A 00:21:40.973 Firmware Slot 1 Read-Only: N/A 00:21:40.973 Firmware Activation Without Reset: N/A 00:21:40.973 Multiple Update Detection Support: N/A 00:21:40.973 Firmware Update Granularity: No Information Provided 00:21:40.973 Per-Namespace SMART Log: No 00:21:40.973 Asymmetric Namespace Access Log Page: Not Supported 00:21:40.973 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:21:40.973 Command Effects Log Page: Supported 00:21:40.973 Get Log Page Extended Data: Supported 00:21:40.973 Telemetry Log Pages: Not Supported 00:21:40.973 Persistent Event Log Pages: Not Supported 00:21:40.973 Supported Log Pages Log Page: May Support 00:21:40.973 Commands Supported & Effects Log Page: Not Supported 00:21:40.973 Feature Identifiers & Effects Log Page:May Support 00:21:40.973 NVMe-MI Commands & Effects Log Page: May Support 00:21:40.973 Data Area 4 for Telemetry Log: Not Supported 00:21:40.973 Error Log Page Entries Supported: 128 00:21:40.973 Keep Alive: Supported 00:21:40.973 Keep Alive Granularity: 10000 ms 00:21:40.973 00:21:40.973 NVM Command Set Attributes 00:21:40.973 ========================== 00:21:40.973 Submission Queue Entry Size 00:21:40.973 Max: 64 00:21:40.973 Min: 64 00:21:40.973 Completion Queue Entry Size 00:21:40.973 Max: 16 00:21:40.973 Min: 16 00:21:40.973 Number of Namespaces: 32 00:21:40.973 Compare Command: Supported 
00:21:40.973 Write Uncorrectable Command: Not Supported 00:21:40.973 Dataset Management Command: Supported 00:21:40.973 Write Zeroes Command: Supported 00:21:40.973 Set Features Save Field: Not Supported 00:21:40.973 Reservations: Not Supported 00:21:40.973 Timestamp: Not Supported 00:21:40.973 Copy: Supported 00:21:40.973 Volatile Write Cache: Present 00:21:40.973 Atomic Write Unit (Normal): 1 00:21:40.973 Atomic Write Unit (PFail): 1 00:21:40.973 Atomic Compare & Write Unit: 1 00:21:40.973 Fused Compare & Write: Supported 00:21:40.973 Scatter-Gather List 00:21:40.973 SGL Command Set: Supported (Dword aligned) 00:21:40.973 SGL Keyed: Not Supported 00:21:40.973 SGL Bit Bucket Descriptor: Not Supported 00:21:40.973 SGL Metadata Pointer: Not Supported 00:21:40.973 Oversized SGL: Not Supported 00:21:40.973 SGL Metadata Address: Not Supported 00:21:40.973 SGL Offset: Not Supported 00:21:40.973 Transport SGL Data Block: Not Supported 00:21:40.973 Replay Protected Memory Block: Not Supported 00:21:40.973 00:21:40.973 Firmware Slot Information 00:21:40.973 ========================= 00:21:40.973 Active slot: 1 00:21:40.973 Slot 1 Firmware Revision: 24.09 00:21:40.973 00:21:40.973 00:21:40.973 Commands Supported and Effects 00:21:40.973 ============================== 00:21:40.973 Admin Commands 00:21:40.973 -------------- 00:21:40.973 Get Log Page (02h): Supported 00:21:40.973 Identify (06h): Supported 00:21:40.973 Abort (08h): Supported 00:21:40.973 Set Features (09h): Supported 00:21:40.973 Get Features (0Ah): Supported 00:21:40.973 Asynchronous Event Request (0Ch): Supported 00:21:40.973 Keep Alive (18h): Supported 00:21:40.973 I/O Commands 00:21:40.973 ------------ 00:21:40.973 Flush (00h): Supported LBA-Change 00:21:40.973 Write (01h): Supported LBA-Change 00:21:40.973 Read (02h): Supported 00:21:40.973 Compare (05h): Supported 00:21:40.973 Write Zeroes (08h): Supported LBA-Change 00:21:40.973 Dataset Management (09h): Supported LBA-Change 00:21:40.973 Copy (19h): 
Supported LBA-Change 00:21:40.973 00:21:40.973 Error Log 00:21:40.973 ========= 00:21:40.973 00:21:40.973 Arbitration 00:21:40.973 =========== 00:21:40.973 Arbitration Burst: 1 00:21:40.973 00:21:40.973 Power Management 00:21:40.973 ================ 00:21:40.973 Number of Power States: 1 00:21:40.973 Current Power State: Power State #0 00:21:40.973 Power State #0: 00:21:40.973 Max Power: 0.00 W 00:21:40.973 Non-Operational State: Operational 00:21:40.973 Entry Latency: Not Reported 00:21:40.973 Exit Latency: Not Reported 00:21:40.973 Relative Read Throughput: 0 00:21:40.973 Relative Read Latency: 0 00:21:40.973 Relative Write Throughput: 0 00:21:40.973 Relative Write Latency: 0 00:21:40.973 Idle Power: Not Reported 00:21:40.973 Active Power: Not Reported 00:21:40.973 Non-Operational Permissive Mode: Not Supported 00:21:40.973 00:21:40.973 Health Information 00:21:40.973 ================== 00:21:40.973 Critical Warnings: 00:21:40.973 Available Spare Space: OK 00:21:40.973 Temperature: OK 00:21:40.973 Device Reliability: OK 00:21:40.973 Read Only: No 00:21:40.973 Volatile Memory Backup: OK 00:21:40.973 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:40.973 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:40.973 Available Spare: 0% 00:21:40.973 Available Sp[2024-07-11 02:27:31.215741] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:21:40.973 [2024-07-11 02:27:31.223540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:21:40.973 [2024-07-11 02:27:31.223600] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:21:40.973 [2024-07-11 02:27:31.223626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.973 [2024-07-11 02:27:31.223638] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.973 [2024-07-11 02:27:31.223650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.973 [2024-07-11 02:27:31.223661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:40.973 [2024-07-11 02:27:31.223734] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:21:40.973 [2024-07-11 02:27:31.223757] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:21:40.973 [2024-07-11 02:27:31.224738] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:40.973 [2024-07-11 02:27:31.224819] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:21:40.973 [2024-07-11 02:27:31.224835] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:21:40.973 [2024-07-11 02:27:31.225743] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:21:40.973 [2024-07-11 02:27:31.225769] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:21:40.973 [2024-07-11 02:27:31.225845] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:21:40.973 [2024-07-11 02:27:31.228532] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:21:40.973 are Threshold: 0% 00:21:40.973 
Life Percentage Used: 0% 00:21:40.973 Data Units Read: 0 00:21:40.973 Data Units Written: 0 00:21:40.973 Host Read Commands: 0 00:21:40.973 Host Write Commands: 0 00:21:40.973 Controller Busy Time: 0 minutes 00:21:40.973 Power Cycles: 0 00:21:40.973 Power On Hours: 0 hours 00:21:40.973 Unsafe Shutdowns: 0 00:21:40.973 Unrecoverable Media Errors: 0 00:21:40.973 Lifetime Error Log Entries: 0 00:21:40.973 Warning Temperature Time: 0 minutes 00:21:40.973 Critical Temperature Time: 0 minutes 00:21:40.974 00:21:40.974 Number of Queues 00:21:40.974 ================ 00:21:40.974 Number of I/O Submission Queues: 127 00:21:40.974 Number of I/O Completion Queues: 127 00:21:40.974 00:21:40.974 Active Namespaces 00:21:40.974 ================= 00:21:40.974 Namespace ID:1 00:21:40.974 Error Recovery Timeout: Unlimited 00:21:40.974 Command Set Identifier: NVM (00h) 00:21:40.974 Deallocate: Supported 00:21:40.974 Deallocated/Unwritten Error: Not Supported 00:21:40.974 Deallocated Read Value: Unknown 00:21:40.974 Deallocate in Write Zeroes: Not Supported 00:21:40.974 Deallocated Guard Field: 0xFFFF 00:21:40.974 Flush: Supported 00:21:40.974 Reservation: Supported 00:21:40.974 Namespace Sharing Capabilities: Multiple Controllers 00:21:40.974 Size (in LBAs): 131072 (0GiB) 00:21:40.974 Capacity (in LBAs): 131072 (0GiB) 00:21:40.974 Utilization (in LBAs): 131072 (0GiB) 00:21:40.974 NGUID: C0464627CA7A4B37861604C5EF602DEE 00:21:40.974 UUID: c0464627-ca7a-4b37-8616-04c5ef602dee 00:21:40.974 Thin Provisioning: Not Supported 00:21:40.974 Per-NS Atomic Units: Yes 00:21:40.974 Atomic Boundary Size (Normal): 0 00:21:40.974 Atomic Boundary Size (PFail): 0 00:21:40.974 Atomic Boundary Offset: 0 00:21:40.974 Maximum Single Source Range Length: 65535 00:21:40.974 Maximum Copy Length: 65535 00:21:40.974 Maximum Source Range Count: 1 00:21:40.974 NGUID/EUI64 Never Reused: No 00:21:40.974 Namespace Write Protected: No 00:21:40.974 Number of LBA Formats: 1 00:21:40.974 Current LBA Format: LBA Format 
#00 00:21:40.974 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:40.974 00:21:40.974 02:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:21:40.974 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.232 [2024-07-11 02:27:31.450518] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:46.508 Initializing NVMe Controllers 00:21:46.508 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:21:46.508 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:21:46.508 Initialization complete. Launching workers. 00:21:46.508 ======================================================== 00:21:46.508 Latency(us) 00:21:46.508 Device Information : IOPS MiB/s Average min max 00:21:46.508 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24083.58 94.08 5314.80 1493.14 7615.73 00:21:46.508 ======================================================== 00:21:46.508 Total : 24083.58 94.08 5314.80 1493.14 7615.73 00:21:46.508 00:21:46.508 [2024-07-11 02:27:36.556839] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:46.508 02:27:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:21:46.508 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.508 [2024-07-11 02:27:36.791525] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:51.782 
Initializing NVMe Controllers 00:21:51.782 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:21:51.782 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:21:51.782 Initialization complete. Launching workers. 00:21:51.782 ======================================================== 00:21:51.782 Latency(us) 00:21:51.782 Device Information : IOPS MiB/s Average min max 00:21:51.782 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24111.00 94.18 5309.49 1498.80 10541.53 00:21:51.782 ======================================================== 00:21:51.782 Total : 24111.00 94.18 5309.49 1498.80 10541.53 00:21:51.782 00:21:51.782 [2024-07-11 02:27:41.814027] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:51.782 02:27:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:21:51.782 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.782 [2024-07-11 02:27:42.035546] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:21:57.067 [2024-07-11 02:27:47.174675] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:21:57.068 Initializing NVMe Controllers 00:21:57.068 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:21:57.068 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:21:57.068 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:21:57.068 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 
00:21:57.068 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:21:57.068 Initialization complete. Launching workers. 00:21:57.068 Starting thread on core 2 00:21:57.068 Starting thread on core 3 00:21:57.068 Starting thread on core 1 00:21:57.068 02:27:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:21:57.068 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.068 [2024-07-11 02:27:47.456569] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:22:00.433 [2024-07-11 02:27:50.525296] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:22:00.433 Initializing NVMe Controllers 00:22:00.433 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:22:00.433 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:22:00.433 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:22:00.433 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:22:00.433 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:22:00.433 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:22:00.433 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:22:00.433 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:22:00.433 Initialization complete. Launching workers. 
00:22:00.433 Starting thread on core 1 with urgent priority queue 00:22:00.433 Starting thread on core 2 with urgent priority queue 00:22:00.433 Starting thread on core 3 with urgent priority queue 00:22:00.433 Starting thread on core 0 with urgent priority queue 00:22:00.433 SPDK bdev Controller (SPDK2 ) core 0: 8332.00 IO/s 12.00 secs/100000 ios 00:22:00.433 SPDK bdev Controller (SPDK2 ) core 1: 6669.67 IO/s 14.99 secs/100000 ios 00:22:00.434 SPDK bdev Controller (SPDK2 ) core 2: 7076.67 IO/s 14.13 secs/100000 ios 00:22:00.434 SPDK bdev Controller (SPDK2 ) core 3: 6974.33 IO/s 14.34 secs/100000 ios 00:22:00.434 ======================================================== 00:22:00.434 00:22:00.434 02:27:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:22:00.434 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.434 [2024-07-11 02:27:50.799351] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:22:00.434 Initializing NVMe Controllers 00:22:00.434 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:22:00.434 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:22:00.434 Namespace ID: 1 size: 0GB 00:22:00.434 Initialization complete. 00:22:00.434 INFO: using host memory buffer for IO 00:22:00.434 Hello world! 
00:22:00.434 [2024-07-11 02:27:50.810493] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:22:00.692 02:27:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:22:00.692 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.692 [2024-07-11 02:27:51.078832] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:22:02.072 Initializing NVMe Controllers 00:22:02.072 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:22:02.072 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:22:02.072 Initialization complete. Launching workers. 00:22:02.072 submit (in ns) avg, min, max = 9356.5, 4485.9, 4024207.4 00:22:02.072 complete (in ns) avg, min, max = 28992.2, 2656.3, 7989840.0 00:22:02.072 00:22:02.072 Submit histogram 00:22:02.072 ================ 00:22:02.072 Range in us Cumulative Count 00:22:02.072 4.480 - 4.504: 0.2330% ( 27) 00:22:02.072 4.504 - 4.527: 1.0012% ( 89) 00:22:02.072 4.527 - 4.551: 2.8828% ( 218) 00:22:02.072 4.551 - 4.575: 6.3093% ( 397) 00:22:02.072 4.575 - 4.599: 10.0811% ( 437) 00:22:02.072 4.599 - 4.622: 13.2660% ( 369) 00:22:02.072 4.622 - 4.646: 15.2684% ( 232) 00:22:02.072 4.646 - 4.670: 16.4336% ( 135) 00:22:02.072 4.670 - 4.693: 17.5902% ( 134) 00:22:02.072 4.693 - 4.717: 19.0834% ( 173) 00:22:02.072 4.717 - 4.741: 22.2337% ( 365) 00:22:02.072 4.741 - 4.764: 26.5234% ( 497) 00:22:02.072 4.764 - 4.788: 30.7440% ( 489) 00:22:02.072 4.788 - 4.812: 33.6268% ( 334) 00:22:02.072 4.812 - 4.836: 35.3789% ( 203) 00:22:02.072 4.836 - 4.859: 36.4319% ( 122) 00:22:02.072 4.859 - 4.883: 36.9670% ( 62) 00:22:02.072 4.883 - 4.907: 37.3727% ( 47) 00:22:02.072 4.907 - 4.930: 37.8129% ( 51) 00:22:02.072 4.930 - 4.954: 38.1322% ( 37) 
00:22:02.072 4.954 - 4.978: 38.4775% ( 40) 00:22:02.072 4.978 - 5.001: 38.7537% ( 32) 00:22:02.072 5.001 - 5.025: 38.9349% ( 21) 00:22:02.072 5.025 - 5.049: 39.1162% ( 21) 00:22:02.072 5.049 - 5.073: 39.2197% ( 12) 00:22:02.072 5.073 - 5.096: 39.3147% ( 11) 00:22:02.072 5.096 - 5.120: 39.4442% ( 15) 00:22:02.072 5.120 - 5.144: 39.8239% ( 44) 00:22:02.072 5.144 - 5.167: 42.8966% ( 356) 00:22:02.072 5.167 - 5.191: 45.6931% ( 324) 00:22:02.072 5.191 - 5.215: 50.0345% ( 503) 00:22:02.072 5.215 - 5.239: 52.1060% ( 240) 00:22:02.072 5.239 - 5.262: 53.8667% ( 204) 00:22:02.072 5.262 - 5.286: 55.3254% ( 169) 00:22:02.072 5.286 - 5.310: 57.4832% ( 250) 00:22:02.072 5.310 - 5.333: 60.1157% ( 305) 00:22:02.072 5.333 - 5.357: 64.5779% ( 517) 00:22:02.072 5.357 - 5.381: 66.6408% ( 239) 00:22:02.072 5.381 - 5.404: 68.8590% ( 257) 00:22:02.072 5.404 - 5.428: 70.9563% ( 243) 00:22:02.072 5.428 - 5.452: 72.2596% ( 151) 00:22:02.072 5.452 - 5.476: 72.8034% ( 63) 00:22:02.072 5.476 - 5.499: 73.3212% ( 60) 00:22:02.072 5.499 - 5.523: 73.5802% ( 30) 00:22:02.072 5.523 - 5.547: 76.7392% ( 366) 00:22:02.072 5.547 - 5.570: 79.8464% ( 360) 00:22:02.072 5.570 - 5.594: 86.9584% ( 824) 00:22:02.072 5.594 - 5.618: 89.6686% ( 314) 00:22:02.072 5.618 - 5.641: 91.8177% ( 249) 00:22:02.072 5.641 - 5.665: 93.6475% ( 212) 00:22:02.072 5.665 - 5.689: 94.0186% ( 43) 00:22:02.072 5.689 - 5.713: 94.1654% ( 17) 00:22:02.072 5.713 - 5.736: 94.2948% ( 15) 00:22:02.072 5.736 - 5.760: 94.3811% ( 10) 00:22:02.072 5.760 - 5.784: 94.4847% ( 12) 00:22:02.072 5.784 - 5.807: 94.5883% ( 12) 00:22:02.072 5.807 - 5.831: 94.7523% ( 19) 00:22:02.072 5.831 - 5.855: 94.9163% ( 19) 00:22:02.072 5.855 - 5.879: 95.0803% ( 19) 00:22:02.072 5.879 - 5.902: 95.1925% ( 13) 00:22:02.072 5.902 - 5.926: 95.3478% ( 18) 00:22:02.072 5.926 - 5.950: 95.4773% ( 15) 00:22:02.072 5.950 - 5.973: 95.5550% ( 9) 00:22:02.072 5.973 - 5.997: 95.6758% ( 14) 00:22:02.072 5.997 - 6.021: 95.7794% ( 12) 00:22:02.072 6.021 - 6.044: 95.8398% ( 7) 
00:22:02.072 6.044 - 6.068: 95.8916% ( 6) 00:22:02.072 6.068 - 6.116: 95.9693% ( 9) 00:22:02.072 6.116 - 6.163: 96.0211% ( 6) 00:22:02.072 6.163 - 6.210: 96.0556% ( 4) 00:22:02.072 6.210 - 6.258: 96.1505% ( 11) 00:22:02.072 6.258 - 6.305: 96.2368% ( 10) 00:22:02.072 6.305 - 6.353: 96.3318% ( 11) 00:22:02.072 6.353 - 6.400: 96.4440% ( 13) 00:22:02.072 6.400 - 6.447: 96.4785% ( 4) 00:22:02.072 6.447 - 6.495: 96.6598% ( 21) 00:22:02.072 6.495 - 6.542: 96.8065% ( 17) 00:22:02.072 6.542 - 6.590: 96.8583% ( 6) 00:22:02.072 6.590 - 6.637: 96.9014% ( 5) 00:22:02.072 6.637 - 6.684: 96.9705% ( 8) 00:22:02.072 6.684 - 6.732: 97.0309% ( 7) 00:22:02.072 6.732 - 6.779: 97.0654% ( 4) 00:22:02.072 6.779 - 6.827: 97.1172% ( 6) 00:22:02.072 6.827 - 6.874: 97.4883% ( 43) 00:22:02.072 6.874 - 6.921: 98.1961% ( 82) 00:22:02.072 6.921 - 6.969: 98.6449% ( 52) 00:22:02.072 6.969 - 7.016: 98.9902% ( 40) 00:22:02.072 7.016 - 7.064: 99.1283% ( 16) 00:22:02.072 7.064 - 7.111: 99.1542% ( 3) 00:22:02.072 7.111 - 7.159: 99.1628% ( 1) 00:22:02.072 7.159 - 7.206: 99.1714% ( 1) 00:22:02.072 7.253 - 7.301: 99.1800% ( 1) 00:22:02.072 7.301 - 7.348: 99.1887% ( 1) 00:22:02.072 7.348 - 7.396: 99.1973% ( 1) 00:22:02.072 7.680 - 7.727: 99.2232% ( 3) 00:22:02.072 7.727 - 7.775: 99.2318% ( 1) 00:22:02.072 7.775 - 7.822: 99.2405% ( 1) 00:22:02.072 7.822 - 7.870: 99.2577% ( 2) 00:22:02.072 7.870 - 7.917: 99.2664% ( 1) 00:22:02.072 8.012 - 8.059: 99.2922% ( 3) 00:22:02.072 8.059 - 8.107: 99.3095% ( 2) 00:22:02.072 8.107 - 8.154: 99.3268% ( 2) 00:22:02.072 8.154 - 8.201: 99.3354% ( 1) 00:22:02.072 8.249 - 8.296: 99.3440% ( 1) 00:22:02.072 8.296 - 8.344: 99.3613% ( 2) 00:22:02.072 8.344 - 8.391: 99.3699% ( 1) 00:22:02.072 8.391 - 8.439: 99.3786% ( 1) 00:22:02.072 8.486 - 8.533: 99.3872% ( 1) 00:22:02.072 8.581 - 8.628: 99.3958% ( 1) 00:22:02.072 8.628 - 8.676: 99.4303% ( 4) 00:22:02.072 8.723 - 8.770: 99.4476% ( 2) 00:22:02.072 8.818 - 8.865: 99.4562% ( 1) 00:22:02.072 8.913 - 8.960: 99.4821% ( 3) 00:22:02.072 
9.007 - 9.055: 99.4994% ( 2) 00:22:02.072 9.055 - 9.102: 99.5080% ( 1) 00:22:02.072 9.102 - 9.150: 99.5167% ( 1) 00:22:02.072 9.150 - 9.197: 99.5253% ( 1) 00:22:02.072 9.197 - 9.244: 99.5339% ( 1) 00:22:02.072 9.244 - 9.292: 99.5426% ( 1) 00:22:02.072 9.292 - 9.339: 99.5512% ( 1) 00:22:02.072 9.387 - 9.434: 99.5771% ( 3) 00:22:02.072 9.434 - 9.481: 99.6030% ( 3) 00:22:02.072 9.719 - 9.766: 99.6116% ( 1) 00:22:02.072 9.813 - 9.861: 99.6202% ( 1) 00:22:02.072 9.908 - 9.956: 99.6289% ( 1) 00:22:02.072 10.050 - 10.098: 99.6375% ( 1) 00:22:02.072 10.098 - 10.145: 99.6461% ( 1) 00:22:02.072 10.240 - 10.287: 99.6548% ( 1) 00:22:02.072 10.430 - 10.477: 99.6634% ( 1) 00:22:02.072 10.619 - 10.667: 99.6720% ( 1) 00:22:02.072 10.714 - 10.761: 99.6806% ( 1) 00:22:02.072 10.856 - 10.904: 99.6893% ( 1) 00:22:02.072 11.188 - 11.236: 99.6979% ( 1) 00:22:02.072 11.283 - 11.330: 99.7152% ( 2) 00:22:02.072 11.330 - 11.378: 99.7238% ( 1) 00:22:02.072 11.520 - 11.567: 99.7324% ( 1) 00:22:02.072 11.899 - 11.947: 99.7497% ( 2) 00:22:02.072 12.136 - 12.231: 99.7583% ( 1) 00:22:02.073 12.231 - 12.326: 99.7670% ( 1) 00:22:02.073 12.326 - 12.421: 99.7756% ( 1) 00:22:02.073 13.084 - 13.179: 99.7842% ( 1) 00:22:02.073 13.653 - 13.748: 99.8101% ( 3) 00:22:02.073 13.748 - 13.843: 99.8187% ( 1) 00:22:02.073 13.843 - 13.938: 99.8360% ( 2) 00:22:02.073 13.938 - 14.033: 99.8705% ( 4) 00:22:02.073 14.033 - 14.127: 99.8964% ( 3) 00:22:02.073 3980.705 - 4004.978: 99.9482% ( 6) 00:22:02.073 4004.978 - 4029.250: 100.0000% ( 6) 00:22:02.073 00:22:02.073 Complete histogram 00:22:02.073 ================== 00:22:02.073 Range in us Cumulative Count 00:22:02.073 2.655 - 2.667: 1.9420% ( 225) 00:22:02.073 2.667 - 2.679: 33.8339% ( 3695) 00:22:02.073 2.679 - 2.690: 53.0209% ( 2223) 00:22:02.073 2.690 - 2.702: 58.6915% ( 657) 00:22:02.073 2.702 - 2.714: 74.6763% ( 1852) 00:22:02.073 2.714 - 2.726: 90.2210% ( 1801) 00:22:02.073 2.726 - 2.738: 95.1321% ( 569) 00:22:02.073 2.738 - 2.750: 97.4797% ( 272) 00:22:02.073 
2.750 - 2.761: 98.2479% ( 89) 00:22:02.073 2.761 - 2.773: 98.5154% ( 31) 00:22:02.073 2.773 - 2.785: 98.6363% ( 14) 00:22:02.073 2.785 - 2.797: 98.6794% ( 5) 00:22:02.073 2.797 - 2.809: 98.7053% ( 3) 00:22:02.073 2.844 - 2.856: 98.7140% ( 1) 00:22:02.073 2.856 - 2.868: 98.7226% ( 1) 00:22:02.073 2.868 - 2.880: 98.7312% ( 1) 00:22:02.073 [2024-07-11 02:27:52.177713] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:22:02.073 2.880 - 2.892: 98.7399% ( 1) 00:22:02.073 2.892 - 2.904: 98.7485% ( 1) 00:22:02.073 2.904 - 2.916: 98.7571% ( 1) 00:22:02.073 2.916 - 2.927: 98.7658% ( 1) 00:22:02.073 2.927 - 2.939: 98.7830% ( 2) 00:22:02.073 2.951 - 2.963: 98.7916% ( 1) 00:22:02.073 2.963 - 2.975: 98.8089% ( 2) 00:22:02.073 2.987 - 2.999: 98.8175% ( 1) 00:22:02.073 2.999 - 3.010: 98.8262% ( 1) 00:22:02.073 3.081 - 3.105: 98.8348% ( 1) 00:22:02.073 3.224 - 3.247: 98.8434% ( 1) 00:22:02.073 3.295 - 3.319: 98.8521% ( 1) 00:22:02.073 3.366 - 3.390: 98.8693% ( 2) 00:22:02.073 3.390 - 3.413: 98.8780% ( 1) 00:22:02.073 3.413 - 3.437: 98.9125% ( 4) 00:22:02.073 3.437 - 3.461: 98.9297% ( 2) 00:22:02.073 3.461 - 3.484: 98.9556% ( 3) 00:22:02.073 3.484 - 3.508: 98.9643% ( 1) 00:22:02.073 3.508 - 3.532: 98.9729% ( 1) 00:22:02.073 3.532 - 3.556: 99.0074% ( 4) 00:22:02.073 3.556 - 3.579: 99.0592% ( 6) 00:22:02.073 3.579 - 3.603: 99.0678% ( 1) 00:22:02.073 3.627 - 3.650: 99.0937% ( 3) 00:22:02.073 3.650 - 3.674: 99.1024% ( 1) 00:22:02.073 3.674 - 3.698: 99.1110% ( 1) 00:22:02.073 3.698 - 3.721: 99.1283% ( 2) 00:22:02.073 4.030 - 4.053: 99.1369% ( 1) 00:22:02.073 4.196 - 4.219: 99.1542% ( 2) 00:22:02.073 4.385 - 4.409: 99.1628% ( 1) 00:22:02.073 4.480 - 4.504: 99.1714% ( 1) 00:22:02.073 4.622 - 4.646: 99.1800% ( 1) 00:22:02.073 4.930 - 4.954: 99.1887% ( 1) 00:22:02.073 4.978 - 5.001: 99.1973% ( 1) 00:22:02.073 5.499 - 5.523: 99.2059% ( 1) 00:22:02.073 5.594 - 5.618: 99.2146% ( 1) 00:22:02.073 5.736 - 5.760: 99.2232% ( 1) 00:22:02.073 
5.760 - 5.784: 99.2318% ( 1) 00:22:02.073 5.855 - 5.879: 99.2491% ( 2) 00:22:02.073 6.068 - 6.116: 99.2577% ( 1) 00:22:02.073 6.210 - 6.258: 99.2664% ( 1) 00:22:02.073 6.258 - 6.305: 99.2836% ( 2) 00:22:02.073 6.637 - 6.684: 99.2922% ( 1) 00:22:02.073 6.732 - 6.779: 99.3009% ( 1) 00:22:02.073 6.827 - 6.874: 99.3095% ( 1) 00:22:02.073 6.921 - 6.969: 99.3181% ( 1) 00:22:02.073 7.064 - 7.111: 99.3268% ( 1) 00:22:02.073 7.206 - 7.253: 99.3354% ( 1) 00:22:02.073 7.917 - 7.964: 99.3440% ( 1) 00:22:02.073 10.430 - 10.477: 99.3527% ( 1) 00:22:02.073 3046.210 - 3058.347: 99.3613% ( 1) 00:22:02.073 3398.163 - 3422.436: 99.3699% ( 1) 00:22:02.073 3980.705 - 4004.978: 99.7065% ( 39) 00:22:02.073 4004.978 - 4029.250: 99.9741% ( 31) 00:22:02.073 5000.154 - 5024.427: 99.9914% ( 2) 00:22:02.073 7961.410 - 8009.956: 100.0000% ( 1) 00:22:02.073 00:22:02.073 02:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:22:02.073 02:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:22:02.073 02:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:22:02.073 02:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:22:02.073 02:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:22:02.331 [ 00:22:02.331 { 00:22:02.331 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:02.331 "subtype": "Discovery", 00:22:02.331 "listen_addresses": [], 00:22:02.331 "allow_any_host": true, 00:22:02.331 "hosts": [] 00:22:02.331 }, 00:22:02.331 { 00:22:02.331 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:22:02.331 "subtype": "NVMe", 00:22:02.331 "listen_addresses": [ 00:22:02.331 { 00:22:02.331 "trtype": "VFIOUSER", 00:22:02.331 "adrfam": "IPv4", 
00:22:02.331 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:22:02.331 "trsvcid": "0" 00:22:02.331 } 00:22:02.331 ], 00:22:02.331 "allow_any_host": true, 00:22:02.331 "hosts": [], 00:22:02.331 "serial_number": "SPDK1", 00:22:02.331 "model_number": "SPDK bdev Controller", 00:22:02.331 "max_namespaces": 32, 00:22:02.331 "min_cntlid": 1, 00:22:02.331 "max_cntlid": 65519, 00:22:02.331 "namespaces": [ 00:22:02.331 { 00:22:02.331 "nsid": 1, 00:22:02.331 "bdev_name": "Malloc1", 00:22:02.331 "name": "Malloc1", 00:22:02.331 "nguid": "1636D2B0D3E04F90BDC1F1882697AB47", 00:22:02.331 "uuid": "1636d2b0-d3e0-4f90-bdc1-f1882697ab47" 00:22:02.331 }, 00:22:02.332 { 00:22:02.332 "nsid": 2, 00:22:02.332 "bdev_name": "Malloc3", 00:22:02.332 "name": "Malloc3", 00:22:02.332 "nguid": "BDBA23641FCF40069A9E5FD3D0715222", 00:22:02.332 "uuid": "bdba2364-1fcf-4006-9a9e-5fd3d0715222" 00:22:02.332 } 00:22:02.332 ] 00:22:02.332 }, 00:22:02.332 { 00:22:02.332 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:22:02.332 "subtype": "NVMe", 00:22:02.332 "listen_addresses": [ 00:22:02.332 { 00:22:02.332 "trtype": "VFIOUSER", 00:22:02.332 "adrfam": "IPv4", 00:22:02.332 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:22:02.332 "trsvcid": "0" 00:22:02.332 } 00:22:02.332 ], 00:22:02.332 "allow_any_host": true, 00:22:02.332 "hosts": [], 00:22:02.332 "serial_number": "SPDK2", 00:22:02.332 "model_number": "SPDK bdev Controller", 00:22:02.332 "max_namespaces": 32, 00:22:02.332 "min_cntlid": 1, 00:22:02.332 "max_cntlid": 65519, 00:22:02.332 "namespaces": [ 00:22:02.332 { 00:22:02.332 "nsid": 1, 00:22:02.332 "bdev_name": "Malloc2", 00:22:02.332 "name": "Malloc2", 00:22:02.332 "nguid": "C0464627CA7A4B37861604C5EF602DEE", 00:22:02.332 "uuid": "c0464627-ca7a-4b37-8616-04c5ef602dee" 00:22:02.332 } 00:22:02.332 ] 00:22:02.332 } 00:22:02.332 ] 00:22:02.332 02:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:02.332 02:27:52 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@34 -- # aerpid=1824080 00:22:02.332 02:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:22:02.332 02:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:22:02.332 02:27:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:22:02.332 02:27:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:02.332 02:27:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:02.332 02:27:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:22:02.332 02:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:22:02.332 02:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:22:02.332 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.332 [2024-07-11 02:27:52.688602] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:22:02.590 Malloc4 00:22:02.590 02:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:22:02.848 [2024-07-11 02:27:53.139064] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:22:02.848 02:27:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:22:02.848 Asynchronous Event Request test 00:22:02.848 Attaching to 
/var/run/vfio-user/domain/vfio-user2/2 00:22:02.848 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:22:02.848 Registering asynchronous event callbacks... 00:22:02.848 Starting namespace attribute notice tests for all controllers... 00:22:02.848 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:02.848 aer_cb - Changed Namespace 00:22:02.848 Cleaning up... 00:22:03.107 [ 00:22:03.107 { 00:22:03.107 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:03.107 "subtype": "Discovery", 00:22:03.107 "listen_addresses": [], 00:22:03.107 "allow_any_host": true, 00:22:03.107 "hosts": [] 00:22:03.107 }, 00:22:03.107 { 00:22:03.107 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:22:03.107 "subtype": "NVMe", 00:22:03.107 "listen_addresses": [ 00:22:03.107 { 00:22:03.107 "trtype": "VFIOUSER", 00:22:03.107 "adrfam": "IPv4", 00:22:03.107 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:22:03.107 "trsvcid": "0" 00:22:03.107 } 00:22:03.107 ], 00:22:03.107 "allow_any_host": true, 00:22:03.107 "hosts": [], 00:22:03.107 "serial_number": "SPDK1", 00:22:03.107 "model_number": "SPDK bdev Controller", 00:22:03.107 "max_namespaces": 32, 00:22:03.107 "min_cntlid": 1, 00:22:03.107 "max_cntlid": 65519, 00:22:03.107 "namespaces": [ 00:22:03.107 { 00:22:03.107 "nsid": 1, 00:22:03.107 "bdev_name": "Malloc1", 00:22:03.107 "name": "Malloc1", 00:22:03.107 "nguid": "1636D2B0D3E04F90BDC1F1882697AB47", 00:22:03.107 "uuid": "1636d2b0-d3e0-4f90-bdc1-f1882697ab47" 00:22:03.107 }, 00:22:03.107 { 00:22:03.107 "nsid": 2, 00:22:03.107 "bdev_name": "Malloc3", 00:22:03.107 "name": "Malloc3", 00:22:03.107 "nguid": "BDBA23641FCF40069A9E5FD3D0715222", 00:22:03.107 "uuid": "bdba2364-1fcf-4006-9a9e-5fd3d0715222" 00:22:03.107 } 00:22:03.107 ] 00:22:03.107 }, 00:22:03.107 { 00:22:03.107 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:22:03.107 "subtype": "NVMe", 00:22:03.107 "listen_addresses": [ 00:22:03.107 { 00:22:03.107 "trtype": "VFIOUSER", 00:22:03.107 
"adrfam": "IPv4", 00:22:03.107 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:22:03.107 "trsvcid": "0" 00:22:03.107 } 00:22:03.107 ], 00:22:03.107 "allow_any_host": true, 00:22:03.107 "hosts": [], 00:22:03.107 "serial_number": "SPDK2", 00:22:03.107 "model_number": "SPDK bdev Controller", 00:22:03.107 "max_namespaces": 32, 00:22:03.107 "min_cntlid": 1, 00:22:03.107 "max_cntlid": 65519, 00:22:03.107 "namespaces": [ 00:22:03.107 { 00:22:03.107 "nsid": 1, 00:22:03.107 "bdev_name": "Malloc2", 00:22:03.107 "name": "Malloc2", 00:22:03.107 "nguid": "C0464627CA7A4B37861604C5EF602DEE", 00:22:03.107 "uuid": "c0464627-ca7a-4b37-8616-04c5ef602dee" 00:22:03.107 }, 00:22:03.107 { 00:22:03.107 "nsid": 2, 00:22:03.107 "bdev_name": "Malloc4", 00:22:03.107 "name": "Malloc4", 00:22:03.107 "nguid": "DBD820D52235429984808EC379C910EC", 00:22:03.107 "uuid": "dbd820d5-2235-4299-8480-8ec379c910ec" 00:22:03.107 } 00:22:03.107 ] 00:22:03.107 } 00:22:03.107 ] 00:22:03.107 02:27:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1824080 00:22:03.107 02:27:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:22:03.107 02:27:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1819130 00:22:03.107 02:27:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1819130 ']' 00:22:03.107 02:27:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1819130 00:22:03.107 02:27:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:22:03.107 02:27:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:03.107 02:27:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1819130 00:22:03.107 02:27:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:03.107 02:27:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:03.107 
02:27:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1819130' 00:22:03.107 killing process with pid 1819130 00:22:03.107 02:27:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1819130 00:22:03.107 02:27:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1819130 00:22:03.372 02:27:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:22:03.372 02:27:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:22:03.372 02:27:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:22:03.372 02:27:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:22:03.372 02:27:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:22:03.372 02:27:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1824193 00:22:03.372 02:27:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1824193' 00:22:03.372 02:27:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:22:03.372 Process pid: 1824193 00:22:03.372 02:27:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:22:03.372 02:27:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1824193 00:22:03.372 02:27:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1824193 ']' 00:22:03.372 02:27:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.372 02:27:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:03.372 02:27:53 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.372 02:27:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:03.372 02:27:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:22:03.372 [2024-07-11 02:27:53.702479] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:22:03.372 [2024-07-11 02:27:53.703462] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:22:03.372 [2024-07-11 02:27:53.703527] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.372 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.372 [2024-07-11 02:27:53.757920] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:03.630 [2024-07-11 02:27:53.848812] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.630 [2024-07-11 02:27:53.848870] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.630 [2024-07-11 02:27:53.848887] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.630 [2024-07-11 02:27:53.848900] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.630 [2024-07-11 02:27:53.848912] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
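The AER test above blocks on a `waitforfile /tmp/aer_touch_file` helper until the `aer` binary touches the file (target/nvmf_vfio_user.sh@37). A minimal sketch of such a polling helper — the real one lives in SPDK's autotest_common.sh, and the retry count and sleep interval here are assumptions:

```shell
# Sketch of a waitforfile-style helper: poll until the path exists,
# giving up after a bounded number of tries. Interval and default
# retry count are assumptions, not SPDK's actual values.
waitforfile() {
    file=$1
    tries=${2:-50}    # default: give up after ~5 s
    i=0
    while [ ! -e "$file" ]; do
        i=$((i + 1))
        if [ "$i" -ge "$tries" ]; then
            return 1  # timed out
        fi
        sleep 0.1
    done
    return 0
}
```

The touch file is removed with `rm -f` before the test is re-armed (nvmf_vfio_user.sh@38), so a stale file from a previous run cannot satisfy the wait.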
00:22:03.630 [2024-07-11 02:27:53.848978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.630 [2024-07-11 02:27:53.849069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.630 [2024-07-11 02:27:53.849124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.630 [2024-07-11 02:27:53.849120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:03.630 [2024-07-11 02:27:53.940820] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:22:03.630 [2024-07-11 02:27:53.941053] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:22:03.630 [2024-07-11 02:27:53.941306] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:22:03.630 [2024-07-11 02:27:53.941812] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:22:03.630 [2024-07-11 02:27:53.942066] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
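The `killprocess` guard exercised twice in this trace (pids 1819130 and 1824193) boils down to: confirm the pid is alive, announce, signal, then reap. A simplified sketch — the real helper in autotest_common.sh also inspects `uname` and the process name via `ps --no-headers -o comm=`, which is omitted here:

```shell
# Sketch of a killprocess-style helper, simplified from the trace:
# liveness check, announce, SIGTERM, reap. Details are assumptions.
killprocess_sketch() {
    pid=$1
    if ! kill -0 "$pid" 2>/dev/null; then
        return 1                      # no such process
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true   # reap; ignore the signal exit code
}
```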
00:22:03.630 02:27:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:03.630 02:27:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:22:03.630 02:27:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:22:04.563 02:27:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:22:05.129 02:27:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:22:05.129 02:27:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:22:05.129 02:27:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:22:05.129 02:27:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:22:05.129 02:27:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:05.387 Malloc1 00:22:05.387 02:27:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:22:05.646 02:27:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:22:05.905 02:27:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:22:06.162 02:27:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:22:06.162 02:27:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:22:06.162 02:27:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:22:06.420 Malloc2 00:22:06.420 02:27:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:22:06.678 02:27:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:22:06.936 02:27:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:22:07.195 02:27:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:22:07.195 02:27:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1824193 00:22:07.195 02:27:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1824193 ']' 00:22:07.195 02:27:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1824193 00:22:07.195 02:27:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:22:07.195 02:27:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:07.195 02:27:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1824193 00:22:07.195 02:27:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:07.195 02:27:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:07.195 02:27:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1824193' 00:22:07.195 killing 
process with pid 1824193 00:22:07.195 02:27:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1824193 00:22:07.195 02:27:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1824193 00:22:07.454 02:27:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:22:07.454 02:27:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:22:07.454 00:22:07.454 real 0m52.965s 00:22:07.454 user 3m29.491s 00:22:07.454 sys 0m4.297s 00:22:07.454 02:27:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:07.454 02:27:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:22:07.454 ************************************ 00:22:07.454 END TEST nvmf_vfio_user 00:22:07.454 ************************************ 00:22:07.454 02:27:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:07.454 02:27:57 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:22:07.454 02:27:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:07.454 02:27:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:07.454 02:27:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:07.454 ************************************ 00:22:07.454 START TEST nvmf_vfio_user_nvme_compliance 00:22:07.454 ************************************ 00:22:07.454 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:22:07.454 * Looking for test storage... 
00:22:07.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:22:07.454 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:07.454 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:22:07.454 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.454 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.455 02:27:57 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:22:07.455 02:27:57 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1824656 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1824656' 00:22:07.455 Process pid: 1824656 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1824656 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1824656 ']' 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:07.455 02:27:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:22:07.455 [2024-07-11 02:27:57.831397] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:22:07.455 [2024-07-11 02:27:57.831503] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.455 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.714 [2024-07-11 02:27:57.894815] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:07.714 [2024-07-11 02:27:57.985015] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:07.714 [2024-07-11 02:27:57.985079] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:07.714 [2024-07-11 02:27:57.985095] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:07.714 [2024-07-11 02:27:57.985109] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:07.714 [2024-07-11 02:27:57.985121] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:07.714 [2024-07-11 02:27:57.988533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.714 [2024-07-11 02:27:57.988573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:07.714 [2024-07-11 02:27:57.988583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.714 02:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:07.714 02:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:22:07.714 02:27:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:22:09.088 02:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:22:09.088 02:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:22:09.088 02:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:22:09.088 02:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.088 02:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:22:09.088 02:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.088 02:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:22:09.088 02:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # 
rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:22:09.088 02:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.088 02:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:22:09.088 malloc0 00:22:09.088 02:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.088 02:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:22:09.088 02:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.088 02:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:22:09.088 02:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.088 02:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:22:09.088 02:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.088 02:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:22:09.088 02:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.088 02:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:22:09.088 02:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.088 02:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:22:09.088 02:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.088 02:27:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance 
-- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:22:09.088 EAL: No free 2048 kB hugepages reported on node 1 00:22:09.088 00:22:09.088 00:22:09.088 CUnit - A unit testing framework for C - Version 2.1-3 00:22:09.088 http://cunit.sourceforge.net/ 00:22:09.088 00:22:09.088 00:22:09.088 Suite: nvme_compliance 00:22:09.088 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-11 02:27:59.325144] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:09.088 [2024-07-11 02:27:59.326648] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:22:09.088 [2024-07-11 02:27:59.326676] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:22:09.088 [2024-07-11 02:27:59.326691] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:22:09.088 [2024-07-11 02:27:59.331179] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:09.088 passed 00:22:09.088 Test: admin_identify_ctrlr_verify_fused ...[2024-07-11 02:27:59.427897] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:09.088 [2024-07-11 02:27:59.433938] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:09.088 passed 00:22:09.346 Test: admin_identify_ns ...[2024-07-11 02:27:59.532585] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:09.346 [2024-07-11 02:27:59.593541] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:22:09.346 [2024-07-11 02:27:59.601540] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:22:09.346 [2024-07-11 02:27:59.622694] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling 
controller 00:22:09.346 passed 00:22:09.346 Test: admin_get_features_mandatory_features ...[2024-07-11 02:27:59.721087] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:09.346 [2024-07-11 02:27:59.724111] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:09.346 passed 00:22:09.605 Test: admin_get_features_optional_features ...[2024-07-11 02:27:59.822770] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:09.605 [2024-07-11 02:27:59.825792] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:09.605 passed 00:22:09.605 Test: admin_set_features_number_of_queues ...[2024-07-11 02:27:59.925505] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:09.863 [2024-07-11 02:28:00.032721] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:09.863 passed 00:22:09.863 Test: admin_get_log_page_mandatory_logs ...[2024-07-11 02:28:00.129041] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:09.863 [2024-07-11 02:28:00.132065] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:09.863 passed 00:22:09.863 Test: admin_get_log_page_with_lpo ...[2024-07-11 02:28:00.231199] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:10.121 [2024-07-11 02:28:00.299531] ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:22:10.121 [2024-07-11 02:28:00.312618] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:10.121 passed 00:22:10.121 Test: fabric_property_get ...[2024-07-11 02:28:00.408801] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:10.121 [2024-07-11 02:28:00.410131] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 
0x7f failed 00:22:10.121 [2024-07-11 02:28:00.411824] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:10.121 passed 00:22:10.121 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-11 02:28:00.510443] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:10.121 [2024-07-11 02:28:00.511790] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:22:10.121 [2024-07-11 02:28:00.513464] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:10.378 passed 00:22:10.378 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-11 02:28:00.611596] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:10.378 [2024-07-11 02:28:00.697521] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:22:10.378 [2024-07-11 02:28:00.713524] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:22:10.378 [2024-07-11 02:28:00.718699] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:10.378 passed 00:22:10.635 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-11 02:28:00.816866] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:10.635 [2024-07-11 02:28:00.818226] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:22:10.635 [2024-07-11 02:28:00.819900] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:10.635 passed 00:22:10.635 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-11 02:28:00.917569] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:10.635 [2024-07-11 02:28:00.992521] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:22:10.635 [2024-07-11 02:28:01.016529] 
vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:22:10.635 [2024-07-11 02:28:01.021668] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:10.893 passed 00:22:10.893 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-11 02:28:01.119181] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:10.893 [2024-07-11 02:28:01.120552] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:22:10.893 [2024-07-11 02:28:01.120597] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:22:10.893 [2024-07-11 02:28:01.122208] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:10.893 passed 00:22:10.893 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-11 02:28:01.220575] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:10.893 [2024-07-11 02:28:01.314528] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:22:11.151 [2024-07-11 02:28:01.322538] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:22:11.151 [2024-07-11 02:28:01.330538] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:22:11.151 [2024-07-11 02:28:01.338522] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:22:11.151 [2024-07-11 02:28:01.367656] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:11.151 passed 00:22:11.151 Test: admin_create_io_sq_verify_pc ...[2024-07-11 02:28:01.464740] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:11.151 [2024-07-11 02:28:01.480537] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:22:11.151 [2024-07-11 02:28:01.498303] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:22:11.151 passed 00:22:11.408 Test: admin_create_io_qp_max_qps ...[2024-07-11 02:28:01.597010] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:12.342 [2024-07-11 02:28:02.697537] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:22:12.906 [2024-07-11 02:28:03.081489] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:12.907 passed 00:22:12.907 Test: admin_create_io_sq_shared_cq ...[2024-07-11 02:28:03.178694] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:12.907 [2024-07-11 02:28:03.311533] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:22:13.165 [2024-07-11 02:28:03.348634] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:13.165 passed 00:22:13.165 00:22:13.165 Run Summary: Type Total Ran Passed Failed Inactive 00:22:13.165 suites 1 1 n/a 0 0 00:22:13.165 tests 18 18 18 0 0 00:22:13.165 asserts 360 360 360 0 n/a 00:22:13.165 00:22:13.165 Elapsed time = 1.693 seconds 00:22:13.165 02:28:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1824656 00:22:13.165 02:28:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1824656 ']' 00:22:13.165 02:28:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1824656 00:22:13.165 02:28:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:22:13.165 02:28:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:13.165 02:28:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1824656 00:22:13.165 02:28:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:13.165 02:28:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:13.165 02:28:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1824656' 00:22:13.165 killing process with pid 1824656 00:22:13.165 02:28:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1824656 00:22:13.165 02:28:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1824656 00:22:13.423 02:28:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:22:13.423 02:28:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:22:13.423 00:22:13.423 real 0m5.920s 00:22:13.423 user 0m16.694s 00:22:13.423 sys 0m0.562s 00:22:13.423 02:28:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:13.423 02:28:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:22:13.423 ************************************ 00:22:13.423 END TEST nvmf_vfio_user_nvme_compliance 00:22:13.423 ************************************ 00:22:13.423 02:28:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:13.423 02:28:03 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:22:13.423 02:28:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:13.423 02:28:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:13.423 02:28:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:13.423 ************************************ 00:22:13.423 START TEST nvmf_vfio_user_fuzz 00:22:13.423 ************************************ 00:22:13.423 02:28:03 
nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:22:13.423 * Looking for test storage... 00:22:13.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:13.424 
02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1825220 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1825220' 00:22:13.424 Process pid: 1825220 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1825220 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1825220 ']' 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:13.424 02:28:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:13.681 02:28:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:13.681 02:28:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:22:13.681 02:28:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:22:15.055 02:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:22:15.055 02:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.055 02:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:15.055 02:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.055 02:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:22:15.055 02:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:22:15.055 02:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.055 02:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:15.055 malloc0 00:22:15.055 02:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.055 02:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:22:15.055 02:28:05 nvmf_tcp.nvmf_vfio_user_fuzz 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.055 02:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:15.055 02:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.055 02:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:22:15.055 02:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.055 02:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:15.055 02:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.055 02:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:22:15.055 02:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.055 02:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:15.055 02:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.055 02:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:22:15.055 02:28:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:22:47.215 Fuzzing completed. 
Shutting down the fuzz application 00:22:47.215 00:22:47.215 Dumping successful admin opcodes: 00:22:47.215 8, 9, 10, 24, 00:22:47.215 Dumping successful io opcodes: 00:22:47.215 0, 00:22:47.215 NS: 0x200003a1ef00 I/O qp, Total commands completed: 511160, total successful commands: 1960, random_seed: 3173277760 00:22:47.215 NS: 0x200003a1ef00 admin qp, Total commands completed: 98186, total successful commands: 801, random_seed: 3077134976 00:22:47.215 02:28:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:22:47.215 02:28:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.215 02:28:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:47.215 02:28:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.215 02:28:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1825220 00:22:47.215 02:28:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1825220 ']' 00:22:47.215 02:28:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1825220 00:22:47.215 02:28:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:22:47.215 02:28:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:47.215 02:28:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1825220 00:22:47.215 02:28:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:47.215 02:28:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:47.215 02:28:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1825220' 00:22:47.215 killing process with pid 1825220 00:22:47.215 02:28:35 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@967 -- # kill 1825220 00:22:47.215 02:28:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1825220 00:22:47.215 02:28:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:22:47.215 02:28:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:22:47.215 00:22:47.215 real 0m32.102s 00:22:47.215 user 0m32.752s 00:22:47.215 sys 0m24.535s 00:22:47.215 02:28:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:47.215 02:28:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:47.215 ************************************ 00:22:47.215 END TEST nvmf_vfio_user_fuzz 00:22:47.215 ************************************ 00:22:47.215 02:28:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:47.215 02:28:35 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:22:47.215 02:28:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:47.215 02:28:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:47.215 02:28:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:47.215 ************************************ 00:22:47.215 START TEST nvmf_host_management 00:22:47.215 ************************************ 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:22:47.215 * Looking for test storage... 
00:22:47.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:47.215 
02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:22:47.215 02:28:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:47.215 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:47.215 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:22:47.215 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:22:47.215 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:47.215 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:47.215 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:47.215 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:47.215 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:22:47.215 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:47.215 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:22:47.215 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:22:47.215 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:22:47.215 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:22:47.215 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:22:47.215 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:22:47.215 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:47.215 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:47.215 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:47.215 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:22:47.216 Found 0000:08:00.0 (0x8086 - 0x159b) 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:47.216 
02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:22:47.216 Found 0000:08:00.1 (0x8086 - 0x159b) 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:22:47.216 Found net devices under 0000:08:00.0: cvl_0_0 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:22:47.216 Found net devices under 0000:08:00.1: cvl_0_1 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:47.216 02:28:37 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:47.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:47.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:22:47.216 00:22:47.216 --- 10.0.0.2 ping statistics --- 00:22:47.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.216 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:47.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:47.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:22:47.216 00:22:47.216 --- 10.0.0.1 ping statistics --- 00:22:47.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.216 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:22:47.216 02:28:37 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1829616 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1829616 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1829616 ']' 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:47.216 02:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:47.216 [2024-07-11 02:28:37.614143] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:22:47.216 [2024-07-11 02:28:37.614245] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.474 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.474 [2024-07-11 02:28:37.680904] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:47.474 [2024-07-11 02:28:37.773196] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.474 [2024-07-11 02:28:37.773257] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.474 [2024-07-11 02:28:37.773283] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:47.474 [2024-07-11 02:28:37.773303] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:47.474 [2024-07-11 02:28:37.773328] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:47.474 [2024-07-11 02:28:37.773396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.474 [2024-07-11 02:28:37.774535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:47.474 [2024-07-11 02:28:37.774616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:47.474 [2024-07-11 02:28:37.774650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.732 02:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:47.733 02:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:22:47.733 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:47.733 02:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:47.733 02:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:47.733 02:28:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:47.733 02:28:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:47.733 02:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.733 02:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:47.733 [2024-07-11 02:28:37.925270] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:47.733 02:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.733 02:28:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:22:47.733 02:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:47.733 02:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:47.733 02:28:37 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:47.733 02:28:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:22:47.733 02:28:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:22:47.733 02:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.733 02:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:47.733 Malloc0 00:22:47.733 [2024-07-11 02:28:37.987464] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:47.733 02:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.733 02:28:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:22:47.733 02:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:47.733 02:28:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:47.733 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1829676 00:22:47.733 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1829676 /var/tmp/bdevperf.sock 00:22:47.733 02:28:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1829676 ']' 00:22:47.733 02:28:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:47.733 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:22:47.733 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:47.733 02:28:38 nvmf_tcp.nvmf_host_management 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:22:47.733 02:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:22:47.733 02:28:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:47.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:47.733 02:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:22:47.733 02:28:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:47.733 02:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:47.733 02:28:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:47.733 02:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:47.733 { 00:22:47.733 "params": { 00:22:47.733 "name": "Nvme$subsystem", 00:22:47.733 "trtype": "$TEST_TRANSPORT", 00:22:47.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.733 "adrfam": "ipv4", 00:22:47.733 "trsvcid": "$NVMF_PORT", 00:22:47.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.733 "hdgst": ${hdgst:-false}, 00:22:47.733 "ddgst": ${ddgst:-false} 00:22:47.733 }, 00:22:47.733 "method": "bdev_nvme_attach_controller" 00:22:47.733 } 00:22:47.733 EOF 00:22:47.733 )") 00:22:47.733 02:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:22:47.733 02:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:22:47.733 02:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:22:47.733 02:28:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:47.733 "params": { 00:22:47.733 "name": "Nvme0", 00:22:47.733 "trtype": "tcp", 00:22:47.733 "traddr": "10.0.0.2", 00:22:47.733 "adrfam": "ipv4", 00:22:47.733 "trsvcid": "4420", 00:22:47.733 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:47.733 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:47.733 "hdgst": false, 00:22:47.733 "ddgst": false 00:22:47.733 }, 00:22:47.733 "method": "bdev_nvme_attach_controller" 00:22:47.733 }' 00:22:47.733 [2024-07-11 02:28:38.063331] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:22:47.733 [2024-07-11 02:28:38.063416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1829676 ] 00:22:47.733 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.733 [2024-07-11 02:28:38.124857] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.991 [2024-07-11 02:28:38.212640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.991 Running I/O for 10 seconds... 
00:22:48.249 02:28:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:48.249 02:28:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:22:48.249 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:48.249 02:28:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.249 02:28:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:48.249 02:28:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.249 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:48.249 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:22:48.249 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:48.249 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:22:48.249 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:22:48.249 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:22:48.249 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:22:48.249 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:22:48.249 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:22:48.249 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:22:48.249 02:28:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.249 
02:28:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:48.249 02:28:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.249 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:22:48.249 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:22:48.249 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:22:48.508 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:22:48.508 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:22:48.508 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:22:48.508 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:22:48.508 02:28:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.508 02:28:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:48.508 02:28:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.508 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:22:48.508 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:22:48.508 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:22:48.508 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:22:48.508 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:22:48.508 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:22:48.508 02:28:38 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.508 02:28:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:48.508 [2024-07-11 02:28:38.790137] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.508 [2024-07-11 02:28:38.790252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.508 [2024-07-11 02:28:38.790269] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.508 [2024-07-11 02:28:38.790284] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.508 [2024-07-11 02:28:38.790298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.508 [2024-07-11 02:28:38.790312] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.508 [2024-07-11 02:28:38.790326] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.508 [2024-07-11 02:28:38.790340] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.508 [2024-07-11 02:28:38.790354] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.508 [2024-07-11 02:28:38.790368] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.508 [2024-07-11 02:28:38.790382] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.508 
[2024-07-11 02:28:38.790396] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.509 [2024-07-11 02:28:38.790410] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.509 [2024-07-11 02:28:38.790424] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.509 [2024-07-11 02:28:38.790437] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.509 [2024-07-11 02:28:38.790451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.509 [2024-07-11 02:28:38.790473] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.509 [2024-07-11 02:28:38.790488] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.509 [2024-07-11 02:28:38.790502] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.509 [2024-07-11 02:28:38.790527] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.509 [2024-07-11 02:28:38.790542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.509 [2024-07-11 02:28:38.790556] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.509 [2024-07-11 02:28:38.790570] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.509 [2024-07-11 02:28:38.790584] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.509 [2024-07-11 02:28:38.790597] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.509 [2024-07-11 02:28:38.790611] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.509 [2024-07-11 02:28:38.790624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.509 [2024-07-11 02:28:38.790638] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.509 [2024-07-11 02:28:38.790653] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.509 [2024-07-11 02:28:38.790667] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.509 [2024-07-11 02:28:38.790681] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5b00 is same with the state(5) to be set 00:22:48.509 02:28:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.509 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:22:48.509 02:28:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.509 02:28:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:48.509 [2024-07-11 02:28:38.798700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.798744] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.798776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.798795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.798815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.798831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.798850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.798868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.798892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.798909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.798928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.798945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.798963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 
nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.798979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.798997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.799014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.799031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.799048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.799066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.799083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.799100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.799117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.799134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.799151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:48.509 [2024-07-11 02:28:38.799169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.799185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.799203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.799219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.799237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.799253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.799270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.799287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.799304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.799324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.799342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 
02:28:38.799359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.799377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.799393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.799410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.799427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.799445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.799461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.799479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.799495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.799520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.799538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.799556] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.799573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.799590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.799607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.799625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.799641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.799659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.799675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.799696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.799713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.799730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.799747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.799769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.509 [2024-07-11 02:28:38.799786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.509 [2024-07-11 02:28:38.799804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.799821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.799838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.799855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.799872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.799889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.799906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.799923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.799941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.799957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.799975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.799991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 
[2024-07-11 02:28:38.800146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800337] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 
02:28:38.800932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.800949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.510 [2024-07-11 02:28:38.800966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.801004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:48.510 [2024-07-11 02:28:38.801066] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14f9ec0 was disconnected and freed. reset controller. 00:22:48.510 [2024-07-11 02:28:38.801139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.510 [2024-07-11 02:28:38.801162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.801180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.510 [2024-07-11 02:28:38.801195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.801216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.510 [2024-07-11 02:28:38.801232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.801249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.510 [2024-07-11 02:28:38.801265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.510 [2024-07-11 02:28:38.801280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ffba0 is same with the state(5) to be set 00:22:48.510 02:28:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.511 02:28:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:22:48.511 [2024-07-11 02:28:38.802538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:48.511 task offset: 73728 on job bdev=Nvme0n1 fails 00:22:48.511 00:22:48.511 Latency(us) 00:22:48.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.511 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:48.511 Job: Nvme0n1 ended in about 0.42 seconds with error 00:22:48.511 Verification LBA range: start 0x0 length 0x400 00:22:48.511 Nvme0n1 : 0.42 1376.34 86.02 152.93 0.00 40438.01 3203.98 40195.41 00:22:48.511 =================================================================================================================== 00:22:48.511 Total : 1376.34 86.02 152.93 0.00 40438.01 3203.98 40195.41 00:22:48.511 [2024-07-11 02:28:38.804743] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:48.511 [2024-07-11 02:28:38.804774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ffba0 (9): Bad file descriptor 00:22:48.511 [2024-07-11 02:28:38.897673] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
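The xtrace above (host_management.sh lines 45-64) shows the `waitforio` polling loop: up to ten attempts, each reading `num_read_ops` for the bdev over the bdevperf RPC socket and succeeding once at least 100 reads are seen. A minimal standalone sketch of that loop follows; the socket path and bdev name are taken from the trace, while the `rpc.py` client name is an assumption (the trace's `rpc_cmd` wrapper is not shown here):

```shell
# Sketch of the waitforio loop traced above. Assumes an SPDK rpc.py client on
# PATH; socket path and bdev name come from the trace (/var/tmp/bdevperf.sock,
# Nvme0n1). Returns 0 once the bdev has completed at least 100 read ops.
waitforio() {
    local sock=$1 bdev=$2
    local ret=1 i count
    for (( i = 10; i != 0; i-- )); do
        # Query cumulative iostat for the bdev and extract the read-op count.
        count=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}
```

In the trace the first poll sees 67 read ops and sleeps; the second sees 515 and breaks out with success.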
00:22:49.445 02:28:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1829676 00:22:49.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1829676) - No such process 00:22:49.445 02:28:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:22:49.445 02:28:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:22:49.445 02:28:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:49.445 02:28:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:22:49.445 02:28:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:22:49.445 02:28:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:22:49.445 02:28:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:49.445 02:28:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:49.445 { 00:22:49.445 "params": { 00:22:49.445 "name": "Nvme$subsystem", 00:22:49.445 "trtype": "$TEST_TRANSPORT", 00:22:49.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:49.445 "adrfam": "ipv4", 00:22:49.445 "trsvcid": "$NVMF_PORT", 00:22:49.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:49.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:49.445 "hdgst": ${hdgst:-false}, 00:22:49.445 "ddgst": ${ddgst:-false} 00:22:49.445 }, 00:22:49.445 "method": "bdev_nvme_attach_controller" 00:22:49.445 } 00:22:49.445 EOF 00:22:49.445 )") 00:22:49.445 02:28:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:22:49.445 02:28:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:22:49.445 02:28:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:22:49.445 02:28:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:49.445 "params": { 00:22:49.445 "name": "Nvme0", 00:22:49.445 "trtype": "tcp", 00:22:49.445 "traddr": "10.0.0.2", 00:22:49.446 "adrfam": "ipv4", 00:22:49.446 "trsvcid": "4420", 00:22:49.446 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:49.446 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:49.446 "hdgst": false, 00:22:49.446 "ddgst": false 00:22:49.446 }, 00:22:49.446 "method": "bdev_nvme_attach_controller" 00:22:49.446 }' 00:22:49.446 [2024-07-11 02:28:39.853897] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:22:49.446 [2024-07-11 02:28:39.853999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1829942 ] 00:22:49.704 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.704 [2024-07-11 02:28:39.919101] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.704 [2024-07-11 02:28:40.006023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.962 Running I/O for 1 seconds... 
00:22:50.896 00:22:50.896 Latency(us) 00:22:50.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.896 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:50.896 Verification LBA range: start 0x0 length 0x400 00:22:50.896 Nvme0n1 : 1.01 1481.97 92.62 0.00 0.00 42150.44 3179.71 38059.43 00:22:50.896 =================================================================================================================== 00:22:50.896 Total : 1481.97 92.62 0.00 0.00 42150.44 3179.71 38059.43 00:22:51.155 02:28:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:22:51.155 02:28:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:22:51.155 02:28:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:51.155 02:28:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:51.155 02:28:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:22:51.155 02:28:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:51.155 02:28:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:22:51.155 02:28:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:51.155 02:28:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:22:51.155 02:28:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:51.155 02:28:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:51.155 rmmod nvme_tcp 00:22:51.155 rmmod nvme_fabrics 00:22:51.155 rmmod nvme_keyring 00:22:51.155 02:28:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:51.155 
02:28:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:22:51.155 02:28:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:22:51.155 02:28:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1829616 ']' 00:22:51.155 02:28:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1829616 00:22:51.155 02:28:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1829616 ']' 00:22:51.155 02:28:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1829616 00:22:51.155 02:28:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:22:51.155 02:28:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:51.155 02:28:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1829616 00:22:51.155 02:28:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:51.155 02:28:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:51.155 02:28:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1829616' 00:22:51.155 killing process with pid 1829616 00:22:51.155 02:28:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1829616 00:22:51.155 02:28:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1829616 00:22:51.415 [2024-07-11 02:28:41.585075] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:22:51.415 02:28:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:51.415 02:28:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:51.415 02:28:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:51.415 02:28:41 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:51.415 02:28:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:51.415 02:28:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.415 02:28:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:51.415 02:28:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.322 02:28:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:53.322 02:28:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:53.322 00:22:53.322 real 0m7.838s 00:22:53.322 user 0m17.805s 00:22:53.322 sys 0m2.340s 00:22:53.322 02:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:53.322 02:28:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:53.322 ************************************ 00:22:53.322 END TEST nvmf_host_management 00:22:53.322 ************************************ 00:22:53.322 02:28:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:53.322 02:28:43 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:22:53.322 02:28:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:53.322 02:28:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:53.322 02:28:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:53.322 ************************************ 00:22:53.322 START TEST nvmf_lvol 00:22:53.322 ************************************ 00:22:53.322 02:28:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:22:53.580 * 
Looking for test storage... 00:22:53.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:53.580 02:28:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:53.580 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:22:53.580 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.580 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.580 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:22:53.581 02:28:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:55.483 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:22:55.484 Found 0000:08:00.0 (0x8086 - 0x159b) 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:22:55.484 Found 0000:08:00.1 (0x8086 - 0x159b) 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:22:55.484 Found net devices under 0000:08:00.0: cvl_0_0 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:22:55.484 Found net devices under 0000:08:00.1: cvl_0_1 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:55.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:55.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:22:55.484 00:22:55.484 --- 10.0.0.2 ping statistics --- 00:22:55.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.484 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:55.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:55.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:22:55.484 00:22:55.484 --- 10.0.0.1 ping statistics --- 00:22:55.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.484 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1831729 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1831729 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1831729 ']' 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 
-- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:22:55.484 [2024-07-11 02:28:45.582134] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:22:55.484 [2024-07-11 02:28:45.582223] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.484 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.484 [2024-07-11 02:28:45.646643] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:55.484 [2024-07-11 02:28:45.733864] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.484 [2024-07-11 02:28:45.733919] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.484 [2024-07-11 02:28:45.733940] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.484 [2024-07-11 02:28:45.733954] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.484 [2024-07-11 02:28:45.733966] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:55.484 [2024-07-11 02:28:45.734272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.484 [2024-07-11 02:28:45.734301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.484 [2024-07-11 02:28:45.734304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.484 02:28:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:55.742 [2024-07-11 02:28:46.134983] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.742 02:28:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:56.309 02:28:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:22:56.309 02:28:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:56.568 02:28:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:22:56.568 02:28:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:22:56.826 02:28:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:22:57.084 02:28:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ee2cabce-8dc2-427e-aa95-ee5c50f03dbf 00:22:57.084 02:28:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ee2cabce-8dc2-427e-aa95-ee5c50f03dbf lvol 20 00:22:57.341 02:28:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=75a4a17e-4b53-4955-8178-211752f2b133 00:22:57.341 02:28:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:22:57.598 02:28:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 75a4a17e-4b53-4955-8178-211752f2b133 00:22:58.163 02:28:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:58.163 [2024-07-11 02:28:48.573825] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.420 02:28:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:58.678 02:28:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1833012 00:22:58.678 02:28:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:22:58.678 02:28:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:22:58.678 EAL: No free 2048 kB hugepages reported on node 1 
00:22:59.611 02:28:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 75a4a17e-4b53-4955-8178-211752f2b133 MY_SNAPSHOT 00:22:59.868 02:28:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=8004ad9d-a1e2-4080-8c14-65657aa5be5e 00:22:59.868 02:28:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 75a4a17e-4b53-4955-8178-211752f2b133 30 00:23:00.433 02:28:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 8004ad9d-a1e2-4080-8c14-65657aa5be5e MY_CLONE 00:23:00.691 02:28:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=83eb8e49-a3d3-44d5-a261-e7ce27d7f323 00:23:00.691 02:28:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 83eb8e49-a3d3-44d5-a261-e7ce27d7f323 00:23:01.255 02:28:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1833012 00:23:09.386 Initializing NVMe Controllers 00:23:09.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:23:09.386 Controller IO queue size 128, less than required. 00:23:09.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:09.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:23:09.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:23:09.386 Initialization complete. Launching workers. 
00:23:09.386 ======================================================== 00:23:09.386 Latency(us) 00:23:09.386 Device Information : IOPS MiB/s Average min max 00:23:09.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9756.10 38.11 13126.34 2042.53 144937.94 00:23:09.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9363.90 36.58 13675.85 2128.29 70777.57 00:23:09.386 ======================================================== 00:23:09.386 Total : 19120.00 74.69 13395.46 2042.53 144937.94 00:23:09.386 00:23:09.386 02:28:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:09.386 02:28:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 75a4a17e-4b53-4955-8178-211752f2b133 00:23:09.644 02:28:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ee2cabce-8dc2-427e-aa95-ee5c50f03dbf 00:23:09.902 02:29:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:23:09.902 02:29:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:23:09.902 02:29:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:23:09.902 02:29:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:09.902 02:29:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:23:09.902 02:29:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:09.902 02:29:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:23:09.902 02:29:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:09.902 02:29:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:09.902 rmmod nvme_tcp 00:23:09.902 rmmod nvme_fabrics 00:23:09.902 rmmod nvme_keyring 00:23:09.902 
02:29:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:09.902 02:29:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:23:09.902 02:29:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:23:09.902 02:29:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1831729 ']' 00:23:09.902 02:29:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1831729 00:23:09.902 02:29:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1831729 ']' 00:23:09.902 02:29:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1831729 00:23:09.902 02:29:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:23:09.902 02:29:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:09.902 02:29:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1831729 00:23:09.902 02:29:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:09.902 02:29:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:09.902 02:29:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1831729' 00:23:09.902 killing process with pid 1831729 00:23:09.903 02:29:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1831729 00:23:09.903 02:29:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1831729 00:23:10.162 02:29:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:10.162 02:29:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:10.162 02:29:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:10.162 02:29:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:10.162 02:29:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:10.163 02:29:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.163 02:29:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.163 02:29:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.074 02:29:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:12.074 00:23:12.074 real 0m18.738s 00:23:12.074 user 1m5.774s 00:23:12.074 sys 0m5.225s 00:23:12.074 02:29:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:12.074 02:29:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:23:12.074 ************************************ 00:23:12.074 END TEST nvmf_lvol 00:23:12.074 ************************************ 00:23:12.074 02:29:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:12.074 02:29:02 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:23:12.074 02:29:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:12.074 02:29:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:12.074 02:29:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:12.369 ************************************ 00:23:12.369 START TEST nvmf_lvs_grow 00:23:12.369 ************************************ 00:23:12.369 02:29:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:23:12.369 * Looking for test storage... 
00:23:12.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:12.369 02:29:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:12.370 02:29:02 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:12.370 02:29:02 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:23:12.370 02:29:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:13.772 02:29:04 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.772 02:29:04 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:23:13.772 Found 0000:08:00.0 (0x8086 - 0x159b) 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:23:13.772 Found 0000:08:00.1 (0x8086 - 0x159b) 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:13.772 02:29:04 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:23:13.772 Found net devices under 0000:08:00.0: cvl_0_0 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:23:13.772 Found net devices under 0000:08:00.1: cvl_0_1 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:13.772 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:14.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:14.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:23:14.031 00:23:14.031 --- 10.0.0.2 ping statistics --- 00:23:14.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.031 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:14.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:14.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:23:14.031 00:23:14.031 --- 10.0.0.1 ping statistics --- 00:23:14.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.031 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1836035 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1836035 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1836035 ']' 
00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:14.031 02:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:23:14.031 [2024-07-11 02:29:04.333718] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:23:14.032 [2024-07-11 02:29:04.333810] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.032 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.032 [2024-07-11 02:29:04.397012] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.290 [2024-07-11 02:29:04.483260] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.290 [2024-07-11 02:29:04.483322] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.290 [2024-07-11 02:29:04.483338] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.290 [2024-07-11 02:29:04.483351] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.290 [2024-07-11 02:29:04.483364] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:14.290 [2024-07-11 02:29:04.483393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.290 02:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.290 02:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:23:14.290 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:14.290 02:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:14.290 02:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:23:14.290 02:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.290 02:29:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:14.549 [2024-07-11 02:29:04.886371] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.549 02:29:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:23:14.549 02:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:14.549 02:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:14.549 02:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:23:14.549 ************************************ 00:23:14.549 START TEST lvs_grow_clean 00:23:14.549 ************************************ 00:23:14.549 02:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:23:14.549 02:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:23:14.549 02:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:23:14.549 02:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:23:14.549 02:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:23:14.549 02:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:23:14.549 02:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:23:14.549 02:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:23:14.549 02:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:23:14.549 02:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:23:15.116 02:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:23:15.116 02:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:23:15.375 02:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e8b49a21-cfa1-4741-9082-7a761b052e9d 00:23:15.375 02:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8b49a21-cfa1-4741-9082-7a761b052e9d 00:23:15.375 02:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:23:15.634 02:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:23:15.634 02:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:23:15.634 02:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e8b49a21-cfa1-4741-9082-7a761b052e9d lvol 150 00:23:15.892 02:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=81e27c66-4285-431c-8cd0-34c799b5092a 00:23:15.892 02:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:23:15.892 02:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:23:16.150 [2024-07-11 02:29:06.436278] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:23:16.150 [2024-07-11 02:29:06.436350] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:23:16.150 true 00:23:16.150 02:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8b49a21-cfa1-4741-9082-7a761b052e9d 00:23:16.150 02:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:23:16.408 02:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:23:16.408 02:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
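The cluster counts reported above can be cross-checked with a little arithmetic. This is a sketch, not part of the test run: it assumes a single 4 MiB cluster is consumed by lvstore metadata (consistent with the `total_data_clusters`, `num_blocks`, and `free_clusters` values this log reports, but an assumption nonetheless), with `--cluster-sz 4194304` and the 4096-byte AIO block size taken from the `rpc.py` calls in the trace:

```python
import math

MIB = 1024 * 1024
CLUSTER_SZ = 4 * MIB   # --cluster-sz 4194304 from the bdev_lvol_create_lvstore call
BLOCK_SZ = 4096        # block size passed to bdev_aio_create
MD_CLUSTERS = 1        # assumption: one cluster reserved for lvstore metadata

def data_clusters(aio_size_mb: int) -> int:
    """Usable data clusters in an lvstore built on an AIO file of this size."""
    return (aio_size_mb * MIB) // CLUSTER_SZ - MD_CLUSTERS

def lvol_blocks(lvol_size_mb: int) -> int:
    """Block count of a thick-provisioned lvol, rounded up to whole clusters."""
    clusters = math.ceil(lvol_size_mb * MIB / CLUSTER_SZ)
    return clusters * CLUSTER_SZ // BLOCK_SZ

print(data_clusters(200))  # 49: total_data_clusters before the grow
print(data_clusters(400))  # 99: total_data_clusters after bdev_lvol_grow_lvstore
print(lvol_blocks(150))    # 38912: num_blocks of the 150 MiB lvol (Nvme0n1)
print(data_clusters(400) - math.ceil(150 * MIB / CLUSTER_SZ))  # 61 free clusters
```

Each printed value matches the corresponding figure the test asserts on (`data_clusters == 49`, `data_clusters == 99`, `free_clusters == 61`), which is why the `(( ... ))` checks in the trace succeed.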
00:23:16.667 02:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 81e27c66-4285-431c-8cd0-34c799b5092a 00:23:16.925 02:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:17.183 [2024-07-11 02:29:07.535632] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.183 02:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:17.442 02:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1836383 00:23:17.442 02:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:17.442 02:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:23:17.442 02:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1836383 /var/tmp/bdevperf.sock 00:23:17.442 02:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1836383 ']' 00:23:17.442 02:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:17.442 02:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.442 02:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:17.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:17.442 02:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.442 02:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:23:17.442 [2024-07-11 02:29:07.838729] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:23:17.442 [2024-07-11 02:29:07.838820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1836383 ] 00:23:17.442 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.701 [2024-07-11 02:29:07.892757] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.701 [2024-07-11 02:29:07.980251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.701 02:29:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.701 02:29:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:23:17.701 02:29:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:23:18.266 Nvme0n1 00:23:18.266 02:29:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:23:18.266 [ 00:23:18.266 { 00:23:18.266 "name": "Nvme0n1", 00:23:18.266 "aliases": [ 00:23:18.266 "81e27c66-4285-431c-8cd0-34c799b5092a" 
00:23:18.266 ], 00:23:18.266 "product_name": "NVMe disk", 00:23:18.266 "block_size": 4096, 00:23:18.266 "num_blocks": 38912, 00:23:18.266 "uuid": "81e27c66-4285-431c-8cd0-34c799b5092a", 00:23:18.266 "assigned_rate_limits": { 00:23:18.266 "rw_ios_per_sec": 0, 00:23:18.266 "rw_mbytes_per_sec": 0, 00:23:18.266 "r_mbytes_per_sec": 0, 00:23:18.266 "w_mbytes_per_sec": 0 00:23:18.266 }, 00:23:18.266 "claimed": false, 00:23:18.266 "zoned": false, 00:23:18.266 "supported_io_types": { 00:23:18.266 "read": true, 00:23:18.266 "write": true, 00:23:18.266 "unmap": true, 00:23:18.266 "flush": true, 00:23:18.266 "reset": true, 00:23:18.266 "nvme_admin": true, 00:23:18.266 "nvme_io": true, 00:23:18.266 "nvme_io_md": false, 00:23:18.266 "write_zeroes": true, 00:23:18.266 "zcopy": false, 00:23:18.266 "get_zone_info": false, 00:23:18.266 "zone_management": false, 00:23:18.266 "zone_append": false, 00:23:18.266 "compare": true, 00:23:18.266 "compare_and_write": true, 00:23:18.266 "abort": true, 00:23:18.266 "seek_hole": false, 00:23:18.266 "seek_data": false, 00:23:18.266 "copy": true, 00:23:18.266 "nvme_iov_md": false 00:23:18.266 }, 00:23:18.266 "memory_domains": [ 00:23:18.266 { 00:23:18.266 "dma_device_id": "system", 00:23:18.266 "dma_device_type": 1 00:23:18.266 } 00:23:18.266 ], 00:23:18.266 "driver_specific": { 00:23:18.266 "nvme": [ 00:23:18.266 { 00:23:18.266 "trid": { 00:23:18.266 "trtype": "TCP", 00:23:18.266 "adrfam": "IPv4", 00:23:18.266 "traddr": "10.0.0.2", 00:23:18.266 "trsvcid": "4420", 00:23:18.266 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:18.266 }, 00:23:18.266 "ctrlr_data": { 00:23:18.266 "cntlid": 1, 00:23:18.266 "vendor_id": "0x8086", 00:23:18.266 "model_number": "SPDK bdev Controller", 00:23:18.266 "serial_number": "SPDK0", 00:23:18.266 "firmware_revision": "24.09", 00:23:18.266 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:18.266 "oacs": { 00:23:18.266 "security": 0, 00:23:18.266 "format": 0, 00:23:18.266 "firmware": 0, 00:23:18.266 "ns_manage": 0 
00:23:18.266 }, 00:23:18.266 "multi_ctrlr": true, 00:23:18.266 "ana_reporting": false 00:23:18.266 }, 00:23:18.266 "vs": { 00:23:18.266 "nvme_version": "1.3" 00:23:18.266 }, 00:23:18.266 "ns_data": { 00:23:18.266 "id": 1, 00:23:18.266 "can_share": true 00:23:18.266 } 00:23:18.266 } 00:23:18.266 ], 00:23:18.266 "mp_policy": "active_passive" 00:23:18.266 } 00:23:18.266 } 00:23:18.266 ] 00:23:18.266 02:29:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1836482 00:23:18.266 02:29:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:23:18.266 02:29:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:18.524 Running I/O for 10 seconds... 00:23:19.479 Latency(us) 00:23:19.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:19.479 Nvme0n1 : 1.00 13844.00 54.08 0.00 0.00 0.00 0.00 0.00 00:23:19.479 =================================================================================================================== 00:23:19.479 Total : 13844.00 54.08 0.00 0.00 0.00 0.00 0.00 00:23:19.479 00:23:20.414 02:29:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e8b49a21-cfa1-4741-9082-7a761b052e9d 00:23:20.414 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:20.414 Nvme0n1 : 2.00 14003.50 54.70 0.00 0.00 0.00 0.00 0.00 00:23:20.414 =================================================================================================================== 00:23:20.414 Total : 14003.50 54.70 0.00 0.00 0.00 0.00 0.00 00:23:20.414 00:23:20.679 true 00:23:20.679 02:29:10 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8b49a21-cfa1-4741-9082-7a761b052e9d 00:23:20.679 02:29:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:23:20.941 02:29:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:23:20.941 02:29:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:23:20.941 02:29:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1836482 00:23:21.507 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:21.507 Nvme0n1 : 3.00 14098.67 55.07 0.00 0.00 0.00 0.00 0.00 00:23:21.507 =================================================================================================================== 00:23:21.507 Total : 14098.67 55.07 0.00 0.00 0.00 0.00 0.00 00:23:21.507 00:23:22.441 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:22.441 Nvme0n1 : 4.00 14162.25 55.32 0.00 0.00 0.00 0.00 0.00 00:23:22.441 =================================================================================================================== 00:23:22.441 Total : 14162.25 55.32 0.00 0.00 0.00 0.00 0.00 00:23:22.441 00:23:23.374 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:23.374 Nvme0n1 : 5.00 14214.20 55.52 0.00 0.00 0.00 0.00 0.00 00:23:23.374 =================================================================================================================== 00:23:23.374 Total : 14214.20 55.52 0.00 0.00 0.00 0.00 0.00 00:23:23.374 00:23:24.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:24.747 Nvme0n1 : 6.00 14247.83 55.66 0.00 0.00 0.00 0.00 0.00 00:23:24.747 
=================================================================================================================== 00:23:24.747 Total : 14247.83 55.66 0.00 0.00 0.00 0.00 0.00 00:23:24.747 00:23:25.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:25.681 Nvme0n1 : 7.00 14165.29 55.33 0.00 0.00 0.00 0.00 0.00 00:23:25.681 =================================================================================================================== 00:23:25.681 Total : 14165.29 55.33 0.00 0.00 0.00 0.00 0.00 00:23:25.681 00:23:26.613 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:26.613 Nvme0n1 : 8.00 14098.62 55.07 0.00 0.00 0.00 0.00 0.00 00:23:26.613 =================================================================================================================== 00:23:26.613 Total : 14098.62 55.07 0.00 0.00 0.00 0.00 0.00 00:23:26.613 00:23:27.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:27.547 Nvme0n1 : 9.00 14044.11 54.86 0.00 0.00 0.00 0.00 0.00 00:23:27.547 =================================================================================================================== 00:23:27.547 Total : 14044.11 54.86 0.00 0.00 0.00 0.00 0.00 00:23:27.547 00:23:28.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:28.480 Nvme0n1 : 10.00 13998.90 54.68 0.00 0.00 0.00 0.00 0.00 00:23:28.480 =================================================================================================================== 00:23:28.480 Total : 13998.90 54.68 0.00 0.00 0.00 0.00 0.00 00:23:28.480 00:23:28.480 00:23:28.480 Latency(us) 00:23:28.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:28.480 Nvme0n1 : 10.01 13996.90 54.68 0.00 0.00 9135.12 2803.48 17864.63 00:23:28.480 
=================================================================================================================== 00:23:28.480 Total : 13996.90 54.68 0.00 0.00 9135.12 2803.48 17864.63 00:23:28.480 0 00:23:28.480 02:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1836383 00:23:28.480 02:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1836383 ']' 00:23:28.480 02:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1836383 00:23:28.480 02:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:23:28.480 02:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:28.480 02:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1836383 00:23:28.480 02:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:28.480 02:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:28.480 02:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1836383' 00:23:28.480 killing process with pid 1836383 00:23:28.480 02:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1836383 00:23:28.480 Received shutdown signal, test time was about 10.000000 seconds 00:23:28.480 00:23:28.480 Latency(us) 00:23:28.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.480 =================================================================================================================== 00:23:28.480 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:28.480 02:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1836383 00:23:28.739 02:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:28.997 02:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:29.255 02:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8b49a21-cfa1-4741-9082-7a761b052e9d 00:23:29.255 02:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:23:29.512 02:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:23:29.512 02:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:23:29.512 02:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:23:30.074 [2024-07-11 02:29:20.191579] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:23:30.074 02:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8b49a21-cfa1-4741-9082-7a761b052e9d 00:23:30.074 02:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:23:30.074 02:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8b49a21-cfa1-4741-9082-7a761b052e9d 00:23:30.074 02:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:30.074 02:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:30.074 02:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:30.074 02:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:30.074 02:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:30.074 02:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:30.074 02:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:30.074 02:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:23:30.074 02:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8b49a21-cfa1-4741-9082-7a761b052e9d 00:23:30.331 request: 00:23:30.331 { 00:23:30.331 "uuid": "e8b49a21-cfa1-4741-9082-7a761b052e9d", 00:23:30.331 "method": "bdev_lvol_get_lvstores", 00:23:30.331 "req_id": 1 00:23:30.331 } 00:23:30.331 Got JSON-RPC error response 00:23:30.331 response: 00:23:30.331 { 00:23:30.331 "code": -19, 00:23:30.331 "message": "No such device" 00:23:30.331 } 00:23:30.331 02:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:23:30.331 02:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:30.331 02:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:30.331 02:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:30.331 02:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:23:30.587 aio_bdev 00:23:30.587 02:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 81e27c66-4285-431c-8cd0-34c799b5092a 00:23:30.587 02:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=81e27c66-4285-431c-8cd0-34c799b5092a 00:23:30.587 02:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:30.587 02:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:23:30.587 02:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:30.587 02:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:30.587 02:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:30.844 02:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 81e27c66-4285-431c-8cd0-34c799b5092a -t 2000 00:23:31.102 [ 00:23:31.102 { 00:23:31.102 "name": "81e27c66-4285-431c-8cd0-34c799b5092a", 00:23:31.102 "aliases": [ 00:23:31.102 "lvs/lvol" 00:23:31.102 ], 00:23:31.102 "product_name": "Logical Volume", 00:23:31.102 "block_size": 4096, 00:23:31.102 "num_blocks": 38912, 00:23:31.102 "uuid": "81e27c66-4285-431c-8cd0-34c799b5092a", 00:23:31.102 "assigned_rate_limits": { 00:23:31.102 
"rw_ios_per_sec": 0, 00:23:31.102 "rw_mbytes_per_sec": 0, 00:23:31.102 "r_mbytes_per_sec": 0, 00:23:31.102 "w_mbytes_per_sec": 0 00:23:31.102 }, 00:23:31.102 "claimed": false, 00:23:31.102 "zoned": false, 00:23:31.102 "supported_io_types": { 00:23:31.102 "read": true, 00:23:31.102 "write": true, 00:23:31.102 "unmap": true, 00:23:31.102 "flush": false, 00:23:31.102 "reset": true, 00:23:31.102 "nvme_admin": false, 00:23:31.102 "nvme_io": false, 00:23:31.102 "nvme_io_md": false, 00:23:31.102 "write_zeroes": true, 00:23:31.102 "zcopy": false, 00:23:31.102 "get_zone_info": false, 00:23:31.102 "zone_management": false, 00:23:31.102 "zone_append": false, 00:23:31.102 "compare": false, 00:23:31.102 "compare_and_write": false, 00:23:31.102 "abort": false, 00:23:31.102 "seek_hole": true, 00:23:31.102 "seek_data": true, 00:23:31.102 "copy": false, 00:23:31.102 "nvme_iov_md": false 00:23:31.102 }, 00:23:31.102 "driver_specific": { 00:23:31.102 "lvol": { 00:23:31.102 "lvol_store_uuid": "e8b49a21-cfa1-4741-9082-7a761b052e9d", 00:23:31.102 "base_bdev": "aio_bdev", 00:23:31.102 "thin_provision": false, 00:23:31.102 "num_allocated_clusters": 38, 00:23:31.102 "snapshot": false, 00:23:31.102 "clone": false, 00:23:31.102 "esnap_clone": false 00:23:31.102 } 00:23:31.102 } 00:23:31.102 } 00:23:31.102 ] 00:23:31.102 02:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:23:31.102 02:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8b49a21-cfa1-4741-9082-7a761b052e9d 00:23:31.102 02:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:23:31.360 02:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:23:31.360 02:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e8b49a21-cfa1-4741-9082-7a761b052e9d 00:23:31.360 02:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:23:31.617 02:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:23:31.617 02:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 81e27c66-4285-431c-8cd0-34c799b5092a 00:23:32.181 02:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e8b49a21-cfa1-4741-9082-7a761b052e9d 00:23:32.439 02:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:23:32.697 02:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:23:32.697 00:23:32.697 real 0m18.035s 00:23:32.697 user 0m16.645s 00:23:32.697 sys 0m2.234s 00:23:32.697 02:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:32.697 02:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:23:32.697 ************************************ 00:23:32.697 END TEST lvs_grow_clean 00:23:32.697 ************************************ 00:23:32.697 02:29:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:23:32.697 02:29:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:23:32.697 02:29:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:32.697 02:29:22 nvmf_tcp.nvmf_lvs_grow -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:23:32.697 02:29:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:23:32.697 ************************************ 00:23:32.697 START TEST lvs_grow_dirty 00:23:32.697 ************************************ 00:23:32.697 02:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:23:32.697 02:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:23:32.697 02:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:23:32.697 02:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:23:32.697 02:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:23:32.697 02:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:23:32.697 02:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:23:32.697 02:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:23:32.697 02:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:23:32.697 02:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:23:32.989 02:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:23:32.989 02:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:23:33.274 02:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=50aa91a7-bc0e-4b8f-9711-a245baee4ca7 00:23:33.274 02:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50aa91a7-bc0e-4b8f-9711-a245baee4ca7 00:23:33.274 02:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:23:33.531 02:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:23:33.531 02:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:23:33.531 02:29:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 50aa91a7-bc0e-4b8f-9711-a245baee4ca7 lvol 150 00:23:34.095 02:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=4adb4fbd-db5a-4c9d-aeb3-72e6b9fd92bb 00:23:34.095 02:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:23:34.095 02:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:23:34.095 [2024-07-11 02:29:24.512327] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:23:34.095 [2024-07-11 02:29:24.512399] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:23:34.352 
true 00:23:34.352 02:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50aa91a7-bc0e-4b8f-9711-a245baee4ca7 00:23:34.352 02:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:23:34.609 02:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:23:34.609 02:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:23:34.866 02:29:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4adb4fbd-db5a-4c9d-aeb3-72e6b9fd92bb 00:23:35.124 02:29:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:35.382 [2024-07-11 02:29:25.703972] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.382 02:29:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:35.640 02:29:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1838134 00:23:35.640 02:29:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:35.640 02:29:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1838134 /var/tmp/bdevperf.sock 00:23:35.640 02:29:26 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:23:35.640 02:29:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1838134 ']' 00:23:35.640 02:29:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:35.640 02:29:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:35.640 02:29:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:35.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:35.640 02:29:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:35.640 02:29:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:23:35.898 [2024-07-11 02:29:26.064359] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:23:35.898 [2024-07-11 02:29:26.064467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1838134 ] 00:23:35.898 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.898 [2024-07-11 02:29:26.125123] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.898 [2024-07-11 02:29:26.212546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.898 02:29:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:35.898 02:29:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:23:35.898 02:29:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:23:36.464 Nvme0n1 00:23:36.464 02:29:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:23:36.723 [ 00:23:36.723 { 00:23:36.723 "name": "Nvme0n1", 00:23:36.723 "aliases": [ 00:23:36.723 "4adb4fbd-db5a-4c9d-aeb3-72e6b9fd92bb" 00:23:36.723 ], 00:23:36.723 "product_name": "NVMe disk", 00:23:36.723 "block_size": 4096, 00:23:36.723 "num_blocks": 38912, 00:23:36.723 "uuid": "4adb4fbd-db5a-4c9d-aeb3-72e6b9fd92bb", 00:23:36.723 "assigned_rate_limits": { 00:23:36.723 "rw_ios_per_sec": 0, 00:23:36.723 "rw_mbytes_per_sec": 0, 00:23:36.723 "r_mbytes_per_sec": 0, 00:23:36.723 "w_mbytes_per_sec": 0 00:23:36.723 }, 00:23:36.723 "claimed": false, 00:23:36.723 "zoned": false, 00:23:36.723 "supported_io_types": { 00:23:36.723 "read": true, 00:23:36.723 "write": true, 
00:23:36.723 "unmap": true, 00:23:36.723 "flush": true, 00:23:36.723 "reset": true, 00:23:36.723 "nvme_admin": true, 00:23:36.723 "nvme_io": true, 00:23:36.723 "nvme_io_md": false, 00:23:36.723 "write_zeroes": true, 00:23:36.723 "zcopy": false, 00:23:36.723 "get_zone_info": false, 00:23:36.723 "zone_management": false, 00:23:36.723 "zone_append": false, 00:23:36.723 "compare": true, 00:23:36.723 "compare_and_write": true, 00:23:36.723 "abort": true, 00:23:36.723 "seek_hole": false, 00:23:36.723 "seek_data": false, 00:23:36.723 "copy": true, 00:23:36.723 "nvme_iov_md": false 00:23:36.723 }, 00:23:36.723 "memory_domains": [ 00:23:36.723 { 00:23:36.723 "dma_device_id": "system", 00:23:36.723 "dma_device_type": 1 00:23:36.723 } 00:23:36.723 ], 00:23:36.723 "driver_specific": { 00:23:36.723 "nvme": [ 00:23:36.723 { 00:23:36.723 "trid": { 00:23:36.723 "trtype": "TCP", 00:23:36.723 "adrfam": "IPv4", 00:23:36.723 "traddr": "10.0.0.2", 00:23:36.723 "trsvcid": "4420", 00:23:36.723 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:36.723 }, 00:23:36.723 "ctrlr_data": { 00:23:36.723 "cntlid": 1, 00:23:36.723 "vendor_id": "0x8086", 00:23:36.723 "model_number": "SPDK bdev Controller", 00:23:36.723 "serial_number": "SPDK0", 00:23:36.723 "firmware_revision": "24.09", 00:23:36.723 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:36.723 "oacs": { 00:23:36.723 "security": 0, 00:23:36.723 "format": 0, 00:23:36.723 "firmware": 0, 00:23:36.723 "ns_manage": 0 00:23:36.723 }, 00:23:36.723 "multi_ctrlr": true, 00:23:36.723 "ana_reporting": false 00:23:36.723 }, 00:23:36.723 "vs": { 00:23:36.723 "nvme_version": "1.3" 00:23:36.723 }, 00:23:36.723 "ns_data": { 00:23:36.723 "id": 1, 00:23:36.723 "can_share": true 00:23:36.723 } 00:23:36.723 } 00:23:36.723 ], 00:23:36.723 "mp_policy": "active_passive" 00:23:36.723 } 00:23:36.723 } 00:23:36.723 ] 00:23:36.723 02:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1838238 00:23:36.723 02:29:27 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:23:36.723 02:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:36.981 Running I/O for 10 seconds... 00:23:37.914 Latency(us) 00:23:37.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:37.914 Nvme0n1 : 1.00 13844.00 54.08 0.00 0.00 0.00 0.00 0.00 00:23:37.914 =================================================================================================================== 00:23:37.914 Total : 13844.00 54.08 0.00 0.00 0.00 0.00 0.00 00:23:37.914 00:23:38.847 02:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 50aa91a7-bc0e-4b8f-9711-a245baee4ca7 00:23:38.847 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:38.847 Nvme0n1 : 2.00 14034.00 54.82 0.00 0.00 0.00 0.00 0.00 00:23:38.847 =================================================================================================================== 00:23:38.847 Total : 14034.00 54.82 0.00 0.00 0.00 0.00 0.00 00:23:38.847 00:23:39.104 true 00:23:39.104 02:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50aa91a7-bc0e-4b8f-9711-a245baee4ca7 00:23:39.104 02:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:23:39.362 02:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:23:39.362 02:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 
00:23:39.362 02:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1838238 00:23:39.927 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:39.927 Nvme0n1 : 3.00 14077.00 54.99 0.00 0.00 0.00 0.00 0.00 00:23:39.927 =================================================================================================================== 00:23:39.927 Total : 14077.00 54.99 0.00 0.00 0.00 0.00 0.00 00:23:39.927 00:23:40.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:40.859 Nvme0n1 : 4.00 13937.25 54.44 0.00 0.00 0.00 0.00 0.00 00:23:40.859 =================================================================================================================== 00:23:40.859 Total : 13937.25 54.44 0.00 0.00 0.00 0.00 0.00 00:23:40.859 00:23:42.232 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:42.232 Nvme0n1 : 5.00 13845.80 54.09 0.00 0.00 0.00 0.00 0.00 00:23:42.232 =================================================================================================================== 00:23:42.232 Total : 13845.80 54.09 0.00 0.00 0.00 0.00 0.00 00:23:42.232 00:23:43.165 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:43.165 Nvme0n1 : 6.00 13799.50 53.90 0.00 0.00 0.00 0.00 0.00 00:23:43.165 =================================================================================================================== 00:23:43.165 Total : 13799.50 53.90 0.00 0.00 0.00 0.00 0.00 00:23:43.165 00:23:44.097 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:44.097 Nvme0n1 : 7.00 13764.14 53.77 0.00 0.00 0.00 0.00 0.00 00:23:44.097 =================================================================================================================== 00:23:44.097 Total : 13764.14 53.77 0.00 0.00 0.00 0.00 0.00 00:23:44.097 00:23:45.029 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:23:45.029 Nvme0n1 : 8.00 13741.62 53.68 0.00 0.00 0.00 0.00 0.00 00:23:45.029 =================================================================================================================== 00:23:45.029 Total : 13741.62 53.68 0.00 0.00 0.00 0.00 0.00 00:23:45.029 00:23:45.963 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:45.963 Nvme0n1 : 9.00 13724.11 53.61 0.00 0.00 0.00 0.00 0.00 00:23:45.963 =================================================================================================================== 00:23:45.963 Total : 13724.11 53.61 0.00 0.00 0.00 0.00 0.00 00:23:45.963 00:23:46.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:46.899 Nvme0n1 : 10.00 13709.30 53.55 0.00 0.00 0.00 0.00 0.00 00:23:46.899 =================================================================================================================== 00:23:46.899 Total : 13709.30 53.55 0.00 0.00 0.00 0.00 0.00 00:23:46.899 00:23:46.899 00:23:46.899 Latency(us) 00:23:46.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:46.899 Nvme0n1 : 10.01 13709.54 53.55 0.00 0.00 9327.26 3373.89 19709.35 00:23:46.899 =================================================================================================================== 00:23:46.899 Total : 13709.54 53.55 0.00 0.00 9327.26 3373.89 19709.35 00:23:46.899 0 00:23:46.899 02:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1838134 00:23:46.899 02:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1838134 ']' 00:23:46.899 02:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1838134 00:23:46.899 02:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:23:46.899 02:29:37 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:46.899 02:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1838134 00:23:46.899 02:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:46.899 02:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:46.899 02:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1838134' 00:23:46.899 killing process with pid 1838134 00:23:46.899 02:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1838134 00:23:46.899 Received shutdown signal, test time was about 10.000000 seconds 00:23:46.899 00:23:46.899 Latency(us) 00:23:46.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.899 =================================================================================================================== 00:23:46.899 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:46.899 02:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1838134 00:23:47.157 02:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:47.416 02:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:47.674 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50aa91a7-bc0e-4b8f-9711-a245baee4ca7 00:23:47.674 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:23:48.241 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:23:48.241 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:23:48.241 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1836035 00:23:48.241 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1836035 00:23:48.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1836035 Killed "${NVMF_APP[@]}" "$@" 00:23:48.241 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:23:48.241 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:23:48.241 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:48.241 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:48.241 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:23:48.241 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1839248 00:23:48.241 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:48.241 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1839248 00:23:48.241 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1839248 ']' 00:23:48.241 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.241 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 
00:23:48.241 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.241 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:48.241 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:23:48.241 [2024-07-11 02:29:38.447172] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:23:48.241 [2024-07-11 02:29:38.447262] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.241 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.241 [2024-07-11 02:29:38.512410] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.241 [2024-07-11 02:29:38.598123] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:48.241 [2024-07-11 02:29:38.598183] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:48.241 [2024-07-11 02:29:38.598199] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:48.241 [2024-07-11 02:29:38.598213] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:48.241 [2024-07-11 02:29:38.598225] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:48.241 [2024-07-11 02:29:38.598254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.499 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:48.499 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:23:48.499 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:48.499 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:48.499 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:23:48.499 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.499 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:23:48.757 [2024-07-11 02:29:38.952799] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:48.757 [2024-07-11 02:29:38.952930] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:48.757 [2024-07-11 02:29:38.952984] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:48.757 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:23:48.757 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 4adb4fbd-db5a-4c9d-aeb3-72e6b9fd92bb 00:23:48.757 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=4adb4fbd-db5a-4c9d-aeb3-72e6b9fd92bb 00:23:48.757 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:48.757 02:29:38 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:23:48.757 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:48.757 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:48.757 02:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:49.014 02:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4adb4fbd-db5a-4c9d-aeb3-72e6b9fd92bb -t 2000 00:23:49.272 [ 00:23:49.272 { 00:23:49.272 "name": "4adb4fbd-db5a-4c9d-aeb3-72e6b9fd92bb", 00:23:49.272 "aliases": [ 00:23:49.273 "lvs/lvol" 00:23:49.273 ], 00:23:49.273 "product_name": "Logical Volume", 00:23:49.273 "block_size": 4096, 00:23:49.273 "num_blocks": 38912, 00:23:49.273 "uuid": "4adb4fbd-db5a-4c9d-aeb3-72e6b9fd92bb", 00:23:49.273 "assigned_rate_limits": { 00:23:49.273 "rw_ios_per_sec": 0, 00:23:49.273 "rw_mbytes_per_sec": 0, 00:23:49.273 "r_mbytes_per_sec": 0, 00:23:49.273 "w_mbytes_per_sec": 0 00:23:49.273 }, 00:23:49.273 "claimed": false, 00:23:49.273 "zoned": false, 00:23:49.273 "supported_io_types": { 00:23:49.273 "read": true, 00:23:49.273 "write": true, 00:23:49.273 "unmap": true, 00:23:49.273 "flush": false, 00:23:49.273 "reset": true, 00:23:49.273 "nvme_admin": false, 00:23:49.273 "nvme_io": false, 00:23:49.273 "nvme_io_md": false, 00:23:49.273 "write_zeroes": true, 00:23:49.273 "zcopy": false, 00:23:49.273 "get_zone_info": false, 00:23:49.273 "zone_management": false, 00:23:49.273 "zone_append": false, 00:23:49.273 "compare": false, 00:23:49.273 "compare_and_write": false, 00:23:49.273 "abort": false, 00:23:49.273 "seek_hole": true, 00:23:49.273 "seek_data": true, 00:23:49.273 "copy": false, 00:23:49.273 "nvme_iov_md": false 
00:23:49.273 }, 00:23:49.273 "driver_specific": { 00:23:49.273 "lvol": { 00:23:49.273 "lvol_store_uuid": "50aa91a7-bc0e-4b8f-9711-a245baee4ca7", 00:23:49.273 "base_bdev": "aio_bdev", 00:23:49.273 "thin_provision": false, 00:23:49.273 "num_allocated_clusters": 38, 00:23:49.273 "snapshot": false, 00:23:49.273 "clone": false, 00:23:49.273 "esnap_clone": false 00:23:49.273 } 00:23:49.273 } 00:23:49.273 } 00:23:49.273 ] 00:23:49.273 02:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:23:49.273 02:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:23:49.273 02:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50aa91a7-bc0e-4b8f-9711-a245baee4ca7 00:23:49.531 02:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:23:49.531 02:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50aa91a7-bc0e-4b8f-9711-a245baee4ca7 00:23:49.531 02:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:23:49.790 02:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:23:49.790 02:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:23:49.790 [2024-07-11 02:29:40.189701] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:23:50.048 02:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
50aa91a7-bc0e-4b8f-9711-a245baee4ca7 00:23:50.048 02:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:23:50.048 02:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50aa91a7-bc0e-4b8f-9711-a245baee4ca7 00:23:50.048 02:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:50.048 02:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:50.048 02:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:50.048 02:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:50.048 02:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:50.048 02:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:50.048 02:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:50.048 02:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:23:50.048 02:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50aa91a7-bc0e-4b8f-9711-a245baee4ca7 00:23:50.048 request: 00:23:50.048 { 00:23:50.048 "uuid": "50aa91a7-bc0e-4b8f-9711-a245baee4ca7", 00:23:50.048 "method": "bdev_lvol_get_lvstores", 
00:23:50.048 "req_id": 1 00:23:50.048 } 00:23:50.048 Got JSON-RPC error response 00:23:50.048 response: 00:23:50.048 { 00:23:50.048 "code": -19, 00:23:50.048 "message": "No such device" 00:23:50.048 } 00:23:50.305 02:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:23:50.305 02:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:50.305 02:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:50.305 02:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:50.305 02:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:23:50.563 aio_bdev 00:23:50.563 02:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4adb4fbd-db5a-4c9d-aeb3-72e6b9fd92bb 00:23:50.563 02:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=4adb4fbd-db5a-4c9d-aeb3-72e6b9fd92bb 00:23:50.563 02:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:50.563 02:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:23:50.563 02:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:50.563 02:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:50.563 02:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:50.821 02:29:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4adb4fbd-db5a-4c9d-aeb3-72e6b9fd92bb -t 2000 00:23:51.079 [ 00:23:51.079 { 00:23:51.079 "name": "4adb4fbd-db5a-4c9d-aeb3-72e6b9fd92bb", 00:23:51.079 "aliases": [ 00:23:51.079 "lvs/lvol" 00:23:51.079 ], 00:23:51.079 "product_name": "Logical Volume", 00:23:51.079 "block_size": 4096, 00:23:51.079 "num_blocks": 38912, 00:23:51.079 "uuid": "4adb4fbd-db5a-4c9d-aeb3-72e6b9fd92bb", 00:23:51.079 "assigned_rate_limits": { 00:23:51.079 "rw_ios_per_sec": 0, 00:23:51.079 "rw_mbytes_per_sec": 0, 00:23:51.079 "r_mbytes_per_sec": 0, 00:23:51.079 "w_mbytes_per_sec": 0 00:23:51.079 }, 00:23:51.079 "claimed": false, 00:23:51.079 "zoned": false, 00:23:51.079 "supported_io_types": { 00:23:51.079 "read": true, 00:23:51.079 "write": true, 00:23:51.079 "unmap": true, 00:23:51.079 "flush": false, 00:23:51.079 "reset": true, 00:23:51.079 "nvme_admin": false, 00:23:51.079 "nvme_io": false, 00:23:51.079 "nvme_io_md": false, 00:23:51.079 "write_zeroes": true, 00:23:51.079 "zcopy": false, 00:23:51.079 "get_zone_info": false, 00:23:51.079 "zone_management": false, 00:23:51.079 "zone_append": false, 00:23:51.079 "compare": false, 00:23:51.079 "compare_and_write": false, 00:23:51.079 "abort": false, 00:23:51.079 "seek_hole": true, 00:23:51.079 "seek_data": true, 00:23:51.079 "copy": false, 00:23:51.079 "nvme_iov_md": false 00:23:51.079 }, 00:23:51.079 "driver_specific": { 00:23:51.079 "lvol": { 00:23:51.079 "lvol_store_uuid": "50aa91a7-bc0e-4b8f-9711-a245baee4ca7", 00:23:51.079 "base_bdev": "aio_bdev", 00:23:51.079 "thin_provision": false, 00:23:51.079 "num_allocated_clusters": 38, 00:23:51.079 "snapshot": false, 00:23:51.079 "clone": false, 00:23:51.079 "esnap_clone": false 00:23:51.079 } 00:23:51.079 } 00:23:51.079 } 00:23:51.079 ] 00:23:51.079 02:29:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:23:51.079 02:29:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50aa91a7-bc0e-4b8f-9711-a245baee4ca7 00:23:51.079 02:29:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:23:51.338 02:29:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:23:51.338 02:29:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 50aa91a7-bc0e-4b8f-9711-a245baee4ca7 00:23:51.338 02:29:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:23:51.596 02:29:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:23:51.596 02:29:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4adb4fbd-db5a-4c9d-aeb3-72e6b9fd92bb 00:23:51.853 02:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 50aa91a7-bc0e-4b8f-9711-a245baee4ca7 00:23:52.418 02:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:23:52.676 02:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:23:52.676 00:23:52.676 real 0m19.883s 00:23:52.676 user 0m49.732s 00:23:52.676 sys 0m4.922s 00:23:52.676 02:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:52.676 02:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 
00:23:52.676 ************************************ 00:23:52.676 END TEST lvs_grow_dirty 00:23:52.676 ************************************ 00:23:52.676 02:29:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:23:52.676 02:29:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:23:52.676 02:29:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:23:52.676 02:29:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:23:52.676 02:29:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:23:52.676 02:29:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:52.676 02:29:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:23:52.676 02:29:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:23:52.676 02:29:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:23:52.676 02:29:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:52.676 nvmf_trace.0 00:23:52.676 02:29:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:23:52.676 02:29:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:23:52.676 02:29:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:52.676 02:29:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:23:52.676 02:29:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:52.676 02:29:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:23:52.676 02:29:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:52.676 02:29:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:52.676 rmmod 
nvme_tcp 00:23:52.676 rmmod nvme_fabrics 00:23:52.676 rmmod nvme_keyring 00:23:52.676 02:29:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:52.676 02:29:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:23:52.676 02:29:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:23:52.676 02:29:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1839248 ']' 00:23:52.676 02:29:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1839248 00:23:52.676 02:29:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1839248 ']' 00:23:52.676 02:29:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1839248 00:23:52.676 02:29:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:23:52.676 02:29:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:52.676 02:29:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1839248 00:23:52.676 02:29:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:52.676 02:29:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:52.676 02:29:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1839248' 00:23:52.676 killing process with pid 1839248 00:23:52.676 02:29:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1839248 00:23:52.676 02:29:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1839248 00:23:52.936 02:29:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:52.936 02:29:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:52.936 02:29:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:52.936 02:29:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:23:52.936 02:29:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:52.936 02:29:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.936 02:29:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:52.936 02:29:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.860 02:29:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:54.860 00:23:54.860 real 0m42.753s 00:23:54.860 user 1m12.143s 00:23:54.860 sys 0m8.759s 00:23:54.860 02:29:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:54.860 02:29:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:23:54.860 ************************************ 00:23:54.860 END TEST nvmf_lvs_grow 00:23:54.860 ************************************ 00:23:55.119 02:29:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:55.119 02:29:45 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:23:55.119 02:29:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:55.119 02:29:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:55.119 02:29:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:55.119 ************************************ 00:23:55.119 START TEST nvmf_bdev_io_wait 00:23:55.119 ************************************ 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:23:55.119 * Looking for test storage... 
00:23:55.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:23:55.119 02:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:57.024 02:29:47 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:23:57.024 Found 0000:08:00.0 (0x8086 - 0x159b) 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:23:57.024 Found 0000:08:00.1 (0x8086 - 0x159b) 00:23:57.024 02:29:47 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:23:57.024 Found net devices under 0000:08:00.0: cvl_0_0 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:23:57.024 Found net devices under 0000:08:00.1: cvl_0_1 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- 
# NVMF_SECOND_TARGET_IP= 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:57.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:57.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:23:57.024 00:23:57.024 --- 10.0.0.2 ping statistics --- 00:23:57.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.024 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:23:57.024 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:57.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:57.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:23:57.024 00:23:57.024 --- 10.0.0.1 ping statistics --- 00:23:57.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.025 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:23:57.025 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.025 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:23:57.025 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:57.025 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.025 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:57.025 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:57.025 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.025 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:57.025 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:57.025 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:57.025 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:57.025 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:57.025 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:23:57.025 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1841203 00:23:57.025 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:57.025 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1841203 00:23:57.025 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1841203 ']' 00:23:57.025 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.025 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:57.025 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.025 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:57.025 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:57.025 [2024-07-11 02:29:47.318450] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:23:57.025 [2024-07-11 02:29:47.318556] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.025 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.025 [2024-07-11 02:29:47.384828] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:57.284 [2024-07-11 02:29:47.476133] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:57.284 [2024-07-11 02:29:47.476195] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.284 [2024-07-11 02:29:47.476218] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.284 [2024-07-11 02:29:47.476238] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.284 [2024-07-11 02:29:47.476256] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:57.284 [2024-07-11 02:29:47.479535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.284 [2024-07-11 02:29:47.479635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.284 [2024-07-11 02:29:47.479700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:57.284 [2024-07-11 02:29:47.479709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.284 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:57.284 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:23:57.284 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:57.284 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:57.284 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:57.284 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.284 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:23:57.284 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.284 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:57.284 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.284 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:23:57.284 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.284 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:57.284 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.284 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:57.284 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.284 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:57.284 [2024-07-11 02:29:47.667579] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.284 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.284 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:57.284 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.284 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:57.543 Malloc0 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:57.543 [2024-07-11 02:29:47.733918] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1841230 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1841231 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1841234 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 
00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:57.543 { 00:23:57.543 "params": { 00:23:57.543 "name": "Nvme$subsystem", 00:23:57.543 "trtype": "$TEST_TRANSPORT", 00:23:57.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.543 "adrfam": "ipv4", 00:23:57.543 "trsvcid": "$NVMF_PORT", 00:23:57.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.543 "hdgst": ${hdgst:-false}, 00:23:57.543 "ddgst": ${ddgst:-false} 00:23:57.543 }, 00:23:57.543 "method": "bdev_nvme_attach_controller" 00:23:57.543 } 00:23:57.543 EOF 00:23:57.543 )") 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1841236 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:57.543 { 00:23:57.543 "params": { 00:23:57.543 "name": "Nvme$subsystem", 00:23:57.543 "trtype": "$TEST_TRANSPORT", 00:23:57.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.543 "adrfam": "ipv4", 00:23:57.543 "trsvcid": "$NVMF_PORT", 00:23:57.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.543 "hdgst": ${hdgst:-false}, 00:23:57.543 "ddgst": ${ddgst:-false} 00:23:57.543 }, 00:23:57.543 "method": "bdev_nvme_attach_controller" 00:23:57.543 } 00:23:57.543 EOF 00:23:57.543 )") 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:57.543 { 00:23:57.543 "params": { 00:23:57.543 "name": "Nvme$subsystem", 00:23:57.543 "trtype": "$TEST_TRANSPORT", 00:23:57.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.543 "adrfam": "ipv4", 00:23:57.543 "trsvcid": "$NVMF_PORT", 00:23:57.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.543 "hdgst": ${hdgst:-false}, 00:23:57.543 "ddgst": ${ddgst:-false} 00:23:57.543 }, 00:23:57.543 "method": "bdev_nvme_attach_controller" 00:23:57.543 } 00:23:57.543 EOF 00:23:57.543 )") 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:57.543 02:29:47 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:57.543 { 00:23:57.543 "params": { 00:23:57.543 "name": "Nvme$subsystem", 00:23:57.543 "trtype": "$TEST_TRANSPORT", 00:23:57.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.543 "adrfam": "ipv4", 00:23:57.543 "trsvcid": "$NVMF_PORT", 00:23:57.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.543 "hdgst": ${hdgst:-false}, 00:23:57.543 "ddgst": ${ddgst:-false} 00:23:57.543 }, 00:23:57.543 "method": "bdev_nvme_attach_controller" 00:23:57.543 } 00:23:57.543 EOF 00:23:57.543 )") 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1841230 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
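The interleaved `gen_nvmf_target_json` traces above all follow one pattern from `nvmf/common.sh`: initialize `config=()`, append one JSON fragment per subsystem via a here-document, then join the fragments with `IFS=,` and pipe them through `jq`. A minimal sketch of that pattern; the transport/address values below are stand-ins (the real script takes `TEST_TRANSPORT`, `NVMF_FIRST_TARGET_IP`, and `NVMF_PORT` from the test environment):

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern traced above.
# Stand-in values; the real script reads these from the test environment.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
# "${@:-1}" defaults to subsystem 1 when no arguments are given.
for subsystem in "${@:-1}"; do
  # Each fragment is one bdev_nvme_attach_controller RPC call;
  # hdgst/ddgst fall back to false unless set in the environment.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the fragments with commas (the real script then pipes this to jq).
IFS=,
printf '%s\n' "${config[*]}"
```

The joined output is what each bdevperf instance receives on `--json /dev/fd/63` via process substitution.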
00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:57.543 "params": { 00:23:57.543 "name": "Nvme1", 00:23:57.543 "trtype": "tcp", 00:23:57.543 "traddr": "10.0.0.2", 00:23:57.543 "adrfam": "ipv4", 00:23:57.543 "trsvcid": "4420", 00:23:57.543 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.543 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:57.543 "hdgst": false, 00:23:57.543 "ddgst": false 00:23:57.543 }, 00:23:57.543 "method": "bdev_nvme_attach_controller" 00:23:57.543 }' 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:57.543 "params": { 00:23:57.543 "name": "Nvme1", 00:23:57.543 "trtype": "tcp", 00:23:57.543 "traddr": "10.0.0.2", 00:23:57.543 "adrfam": "ipv4", 00:23:57.543 "trsvcid": "4420", 00:23:57.543 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.543 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:57.543 "hdgst": false, 00:23:57.543 "ddgst": false 00:23:57.543 }, 00:23:57.543 "method": "bdev_nvme_attach_controller" 00:23:57.543 }' 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:23:57.543 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:57.544 "params": { 00:23:57.544 "name": "Nvme1", 00:23:57.544 "trtype": "tcp", 00:23:57.544 "traddr": "10.0.0.2", 00:23:57.544 "adrfam": "ipv4", 00:23:57.544 "trsvcid": "4420", 00:23:57.544 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.544 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:57.544 "hdgst": false, 00:23:57.544 "ddgst": false 00:23:57.544 }, 00:23:57.544 "method": "bdev_nvme_attach_controller" 00:23:57.544 }' 00:23:57.544 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:23:57.544 02:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 
00:23:57.544 "params": { 00:23:57.544 "name": "Nvme1", 00:23:57.544 "trtype": "tcp", 00:23:57.544 "traddr": "10.0.0.2", 00:23:57.544 "adrfam": "ipv4", 00:23:57.544 "trsvcid": "4420", 00:23:57.544 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.544 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:57.544 "hdgst": false, 00:23:57.544 "ddgst": false 00:23:57.544 }, 00:23:57.544 "method": "bdev_nvme_attach_controller" 00:23:57.544 }' 00:23:57.544 [2024-07-11 02:29:47.785255] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:23:57.544 [2024-07-11 02:29:47.785255] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:23:57.544 [2024-07-11 02:29:47.785272] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:23:57.544 [2024-07-11 02:29:47.785364] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:23:57.544 [2024-07-11 02:29:47.785364] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:23:57.544 [2024-07-11 02:29:47.785365] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:57.544 [2024-07-11 02:29:47.786154] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:23:57.544 [2024-07-11 02:29:47.786240] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:23:57.544 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.544 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.544 [2024-07-11 02:29:47.927800] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.802 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.802 [2024-07-11 02:29:47.994914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:23:57.802 [2024-07-11 02:29:47.997740] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.802 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.802 [2024-07-11 02:29:48.065194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:23:57.802 [2024-07-11 02:29:48.080091] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.802 [2024-07-11 02:29:48.140154] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.802 [2024-07-11 02:29:48.153706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:57.802 [2024-07-11 02:29:48.207559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:23:58.061 Running I/O for 1 seconds... 00:23:58.061 Running I/O for 1 seconds... 00:23:58.061 Running I/O for 1 seconds... 00:23:58.320 Running I/O for 1 seconds... 
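The four "Running I/O for 1 seconds..." lines above come from four bdevperf instances (write, read, flush, unmap) launched in the background, with the harness recording each PID (the `WRITE_PID`/`UNMAP_PID` assignments traced earlier) and waiting on each one. A sketch of that job-control pattern; `run_workload` here is a placeholder standing in for a bdevperf invocation, not the real command:

```shell
#!/usr/bin/env bash
# Sketch of bdev_io_wait.sh's job control: launch workloads in the
# background, record their PIDs, then wait on each so any failure
# propagates. run_workload is a placeholder, not bdevperf.
run_workload() { sleep 0.1; echo "workload $1 done"; }

run_workload write & WRITE_PID=$!
run_workload read  & READ_PID=$!
run_workload flush & FLUSH_PID=$!
run_workload unmap & UNMAP_PID=$!
sync

# wait <pid> returns that job's exit status, so a failed job fails the test.
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"
status=$?
echo "all workloads finished (status=$status)"
```

Waiting on explicit PIDs rather than a bare `wait` is what lets the script assert per-workload success, as the numbered `wait 18412xx` lines in the trace do.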
00:23:59.259 00:23:59.259 Latency(us) 00:23:59.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.259 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:23:59.259 Nvme1n1 : 1.00 144784.58 565.56 0.00 0.00 880.61 342.85 1098.33 00:23:59.259 =================================================================================================================== 00:23:59.259 Total : 144784.58 565.56 0.00 0.00 880.61 342.85 1098.33 00:23:59.259 00:23:59.259 Latency(us) 00:23:59.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.259 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:23:59.259 Nvme1n1 : 1.01 10057.88 39.29 0.00 0.00 12671.85 7330.32 22233.69 00:23:59.259 =================================================================================================================== 00:23:59.259 Total : 10057.88 39.29 0.00 0.00 12671.85 7330.32 22233.69 00:23:59.259 00:23:59.259 Latency(us) 00:23:59.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.259 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:23:59.259 Nvme1n1 : 1.01 8224.59 32.13 0.00 0.00 15482.06 9417.77 27185.30 00:23:59.259 =================================================================================================================== 00:23:59.259 Total : 8224.59 32.13 0.00 0.00 15482.06 9417.77 27185.30 00:23:59.259 00:23:59.259 Latency(us) 00:23:59.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.259 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:23:59.259 Nvme1n1 : 1.01 7608.84 29.72 0.00 0.00 16741.64 7524.50 28350.39 00:23:59.259 =================================================================================================================== 00:23:59.259 Total : 7608.84 29.72 0.00 0.00 16741.64 7524.50 28350.39 00:23:59.259 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 1841231 00:23:59.259 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1841234 00:23:59.259 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1841236 00:23:59.518 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:59.518 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.518 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:59.518 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.518 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:23:59.518 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:23:59.518 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:59.518 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:23:59.518 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:59.518 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:23:59.518 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:59.518 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:59.518 rmmod nvme_tcp 00:23:59.518 rmmod nvme_fabrics 00:23:59.518 rmmod nvme_keyring 00:23:59.518 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:59.518 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:23:59.518 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:23:59.518 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1841203 ']' 00:23:59.518 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1841203 00:23:59.518 02:29:49 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1841203 ']' 00:23:59.518 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1841203 00:23:59.518 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:23:59.518 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:59.518 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1841203 00:23:59.518 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:59.518 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:59.518 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1841203' 00:23:59.518 killing process with pid 1841203 00:23:59.518 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1841203 00:23:59.518 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1841203 00:23:59.778 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:59.778 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:59.778 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:59.778 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:59.778 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:59.778 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.778 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.778 02:29:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.687 02:29:51 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:01.687 00:24:01.687 real 0m6.676s 00:24:01.687 user 0m15.220s 00:24:01.687 sys 0m3.353s 00:24:01.687 02:29:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:01.687 02:29:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:01.687 ************************************ 00:24:01.687 END TEST nvmf_bdev_io_wait 00:24:01.687 ************************************ 00:24:01.687 02:29:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:01.687 02:29:52 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:24:01.687 02:29:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:01.687 02:29:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:01.687 02:29:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:01.687 ************************************ 00:24:01.687 START TEST nvmf_queue_depth 00:24:01.687 ************************************ 00:24:01.687 02:29:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:24:01.947 * Looking for test storage... 
00:24:01.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:01.947 02:29:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:01.947 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:24:01.947 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:01.947 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.947 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.947 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.947 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.947 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:01.947 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.947 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:01.947 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.947 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.947 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:24:01.947 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:24:01.947 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.947 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.947 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:01.947 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.947 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:01.947 02:29:52 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.947 02:29:52 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.947 02:29:52 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.947 02:29:52 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.947 02:29:52 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:24:01.948 02:29:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local 
-a pci_devs 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:24:03.858 Found 0000:08:00.0 (0x8086 - 0x159b) 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:24:03.858 Found 0000:08:00.1 (0x8086 - 
0x159b) 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:24:03.858 Found net devices under 0000:08:00.0: cvl_0_0 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:24:03.858 Found net devices under 0000:08:00.1: cvl_0_1 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:03.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:03.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:24:03.858 00:24:03.858 --- 10.0.0.2 ping statistics --- 00:24:03.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.858 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:03.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:24:03.858 00:24:03.858 --- 10.0.0.1 ping statistics --- 00:24:03.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.858 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.858 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:24:03.859 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:03.859 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.859 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:03.859 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:03.859 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.859 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:03.859 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:03.859 02:29:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:24:03.859 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:03.859 02:29:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:03.859 02:29:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:24:03.859 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1842951 00:24:03.859 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:03.859 02:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1842951 00:24:03.859 02:29:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1842951 ']' 00:24:03.859 02:29:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.859 02:29:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:03.859 02:29:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.859 02:29:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:03.859 02:29:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:03.859 [2024-07-11 02:29:54.042281] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:24:03.859 [2024-07-11 02:29:54.042378] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.859 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.859 [2024-07-11 02:29:54.107153] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.859 [2024-07-11 02:29:54.193332] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:03.859 [2024-07-11 02:29:54.193401] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.859 [2024-07-11 02:29:54.193418] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.859 [2024-07-11 02:29:54.193431] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.859 [2024-07-11 02:29:54.193444] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:03.859 [2024-07-11 02:29:54.193474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:04.118 [2024-07-11 02:29:54.315746] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:04.118 02:29:54 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:04.118 Malloc0 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:04.118 [2024-07-11 02:29:54.367970] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1842972 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id 
$NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1842972 /var/tmp/bdevperf.sock 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1842972 ']' 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:04.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:04.118 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:04.118 [2024-07-11 02:29:54.418621] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:24:04.118 [2024-07-11 02:29:54.418714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1842972 ] 00:24:04.118 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.118 [2024-07-11 02:29:54.479397] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.377 [2024-07-11 02:29:54.567060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.377 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:04.377 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:24:04.377 02:29:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:04.377 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.377 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:04.637 NVMe0n1 00:24:04.637 02:29:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.637 02:29:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:04.637 Running I/O for 10 seconds... 
00:24:16.829 00:24:16.829 Latency(us) 00:24:16.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.829 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:24:16.829 Verification LBA range: start 0x0 length 0x4000 00:24:16.829 NVMe0n1 : 10.08 7936.89 31.00 0.00 0.00 128335.04 12913.02 77672.30 00:24:16.829 =================================================================================================================== 00:24:16.829 Total : 7936.89 31.00 0.00 0.00 128335.04 12913.02 77672.30 00:24:16.829 0 00:24:16.829 02:30:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1842972 00:24:16.829 02:30:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1842972 ']' 00:24:16.829 02:30:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1842972 00:24:16.829 02:30:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:24:16.829 02:30:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:16.829 02:30:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1842972 00:24:16.829 02:30:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:16.829 02:30:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:16.829 02:30:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1842972' 00:24:16.829 killing process with pid 1842972 00:24:16.829 02:30:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1842972 00:24:16.829 Received shutdown signal, test time was about 10.000000 seconds 00:24:16.829 00:24:16.829 Latency(us) 00:24:16.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.829 
=================================================================================================================== 00:24:16.829 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:16.829 02:30:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1842972 00:24:16.829 02:30:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:24:16.829 02:30:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:24:16.829 02:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:16.829 02:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:24:16.829 02:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:16.829 02:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:24:16.830 02:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:16.830 02:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:16.830 rmmod nvme_tcp 00:24:16.830 rmmod nvme_fabrics 00:24:16.830 rmmod nvme_keyring 00:24:16.830 02:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:16.830 02:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:24:16.830 02:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:24:16.830 02:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1842951 ']' 00:24:16.830 02:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1842951 00:24:16.830 02:30:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1842951 ']' 00:24:16.830 02:30:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1842951 00:24:16.830 02:30:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:24:16.830 02:30:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:16.830 02:30:05 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1842951 00:24:16.830 02:30:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:16.830 02:30:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:16.830 02:30:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1842951' 00:24:16.830 killing process with pid 1842951 00:24:16.830 02:30:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1842951 00:24:16.830 02:30:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1842951 00:24:16.830 02:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:16.830 02:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:16.830 02:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:16.830 02:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:16.830 02:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:16.830 02:30:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.830 02:30:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:16.830 02:30:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.398 02:30:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:17.398 00:24:17.398 real 0m15.539s 00:24:17.398 user 0m22.434s 00:24:17.398 sys 0m2.675s 00:24:17.398 02:30:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:17.398 02:30:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:17.398 ************************************ 00:24:17.398 END TEST nvmf_queue_depth 
00:24:17.398 ************************************ 00:24:17.398 02:30:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:17.398 02:30:07 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:24:17.398 02:30:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:17.398 02:30:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:17.398 02:30:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:17.398 ************************************ 00:24:17.398 START TEST nvmf_target_multipath 00:24:17.398 ************************************ 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:24:17.398 * Looking for test storage... 00:24:17.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:17.398 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:17.399 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:17.399 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:17.399 02:30:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:17.399 02:30:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:17.399 02:30:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:17.399 02:30:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:17.399 02:30:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # 
nvmftestinit 00:24:17.399 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:17.399 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:17.399 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:17.399 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:17.399 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:17.399 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.399 02:30:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:17.399 02:30:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.399 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:17.399 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:17.399 02:30:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:24:17.399 02:30:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:19.301 
02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:19.301 02:30:09 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:24:19.301 Found 0000:08:00.0 (0x8086 - 0x159b) 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.301 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:24:19.302 Found 0000:08:00.1 (0x8086 - 0x159b) 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:19.302 
02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:24:19.302 Found net devices under 0000:08:00.0: cvl_0_0 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:24:19.302 Found net devices under 0000:08:00.1: cvl_0_1 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:19.302 02:30:09 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:19.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:19.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:24:19.302 00:24:19.302 --- 10.0.0.2 ping statistics --- 00:24:19.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.302 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:19.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:19.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:24:19.302 00:24:19.302 --- 10.0.0.1 ping statistics --- 00:24:19.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.302 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:24:19.302 only one NIC for nvmf test 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # 
nvmftestfini 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:24:19.302 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:19.303 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:19.303 rmmod nvme_tcp 00:24:19.303 rmmod nvme_fabrics 00:24:19.303 rmmod nvme_keyring 00:24:19.303 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:19.303 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:24:19.303 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:24:19.303 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:19.303 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:19.303 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:19.303 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:19.303 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:19.303 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:19.303 02:30:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.303 02:30:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:19.303 02:30:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.210 02:30:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:24:21.210 02:30:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:24:21.210 02:30:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:24:21.210 02:30:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:21.210 02:30:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:24:21.210 02:30:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:21.210 02:30:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:24:21.210 02:30:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:21.210 02:30:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:21.210 02:30:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:21.211 02:30:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:24:21.211 02:30:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:24:21.211 02:30:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:21.211 02:30:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:21.211 02:30:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:21.211 02:30:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:21.211 02:30:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:21.211 02:30:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:21.211 02:30:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.211 02:30:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:21.211 02:30:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:24:21.211 02:30:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:21.211 00:24:21.211 real 0m3.912s 00:24:21.211 user 0m0.625s 00:24:21.211 sys 0m1.273s 00:24:21.211 02:30:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:21.211 02:30:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:21.211 ************************************ 00:24:21.211 END TEST nvmf_target_multipath 00:24:21.211 ************************************ 00:24:21.211 02:30:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:21.211 02:30:11 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:24:21.211 02:30:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:21.211 02:30:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:21.211 02:30:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:21.211 ************************************ 00:24:21.211 START TEST nvmf_zcopy 00:24:21.211 ************************************ 00:24:21.211 02:30:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:24:21.469 * Looking for test storage... 
00:24:21.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:21.469 02:30:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:21.469 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:24:21.469 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.469 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.469 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:21.469 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.469 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.469 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.469 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:24:21.470 02:30:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:24:22.848 02:30:13 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:22.848 02:30:13 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:24:22.848 Found 0000:08:00.0 (0x8086 - 0x159b) 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:24:22.848 Found 0000:08:00.1 (0x8086 - 0x159b) 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:22.848 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:24:22.849 Found net devices under 0000:08:00.0: cvl_0_0 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:24:22.849 Found net devices under 0000:08:00.1: cvl_0_1 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.849 02:30:13 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:22.849 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:23.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:23.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:24:23.107 00:24:23.107 --- 10.0.0.2 ping statistics --- 00:24:23.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.107 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:23.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:23.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:24:23.107 00:24:23.107 --- 10.0.0.1 ping statistics --- 00:24:23.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.107 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1847570 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1847570 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1847570 ']' 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:23.107 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:23.107 [2024-07-11 02:30:13.439651] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:24:23.107 [2024-07-11 02:30:13.439755] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.107 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.107 [2024-07-11 02:30:13.504894] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.365 [2024-07-11 02:30:13.594483] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:23.365 [2024-07-11 02:30:13.594556] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:23.365 [2024-07-11 02:30:13.594574] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:23.365 [2024-07-11 02:30:13.594587] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:23.365 [2024-07-11 02:30:13.594599] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:23.365 [2024-07-11 02:30:13.594640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.365 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:23.365 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:24:23.365 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:23.365 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:23.365 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:23.365 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.365 02:30:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:24:23.365 02:30:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:24:23.365 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.365 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:23.365 [2024-07-11 02:30:13.725402] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.365 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.365 02:30:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:24:23.365 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 
00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:24:23.366 [2024-07-11 02:30:13.741543] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:24:23.366 malloc0
00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:24:23.366 {
00:24:23.366 "params": {
00:24:23.366 "name": "Nvme$subsystem",
00:24:23.366 "trtype": "$TEST_TRANSPORT",
00:24:23.366 "traddr": "$NVMF_FIRST_TARGET_IP",
00:24:23.366 "adrfam": "ipv4",
00:24:23.366 "trsvcid": "$NVMF_PORT",
00:24:23.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:24:23.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:24:23.366 "hdgst": ${hdgst:-false},
00:24:23.366 "ddgst": ${ddgst:-false}
00:24:23.366 },
00:24:23.366 "method": "bdev_nvme_attach_controller"
00:24:23.366 }
00:24:23.366 EOF
00:24:23.366 )")
00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:24:23.366 02:30:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:24:23.366 "params": {
00:24:23.366 "name": "Nvme1",
00:24:23.366 "trtype": "tcp",
00:24:23.366 "traddr": "10.0.0.2",
00:24:23.366 "adrfam": "ipv4",
00:24:23.366 "trsvcid": "4420",
00:24:23.366 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:24:23.366 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:24:23.366 "hdgst": false,
00:24:23.366 "ddgst": false
00:24:23.366 },
00:24:23.366 "method": "bdev_nvme_attach_controller"
00:24:23.366 }'
00:24:23.624 [2024-07-11 02:30:13.821341] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:24:23.624 [2024-07-11 02:30:13.821433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1847595 ]
00:24:23.624 EAL: No free 2048 kB hugepages reported on node 1
00:24:23.624 [2024-07-11 02:30:13.881948] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:23.624 [2024-07-11 02:30:13.972634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:23.889 Running I/O for 10 seconds...
00:24:33.896
00:24:33.897 Latency(us)
00:24:33.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:33.897 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:24:33.897 Verification LBA range: start 0x0 length 0x1000
00:24:33.897 Nvme1n1 : 10.02 5339.97 41.72 0.00 0.00 23899.18 3325.35 33010.73
00:24:33.897 ===================================================================================================================
00:24:33.897 Total : 5339.97 41.72 0.00 0.00 23899.18 3325.35 33010.73
00:24:34.155 02:30:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1848556
00:24:34.155 02:30:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:24:34.155 02:30:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:24:34.155 02:30:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:24:34.155 02:30:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:24:34.155 02:30:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:24:34.155 02:30:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:24:34.155 02:30:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:24:34.155 02:30:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:24:34.155 {
00:24:34.155 "params": {
00:24:34.155 "name": "Nvme$subsystem",
00:24:34.155 "trtype": "$TEST_TRANSPORT",
00:24:34.155 "traddr": "$NVMF_FIRST_TARGET_IP",
00:24:34.155 "adrfam": "ipv4",
00:24:34.155 "trsvcid": "$NVMF_PORT",
00:24:34.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:24:34.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:24:34.155 "hdgst": ${hdgst:-false},
00:24:34.155 "ddgst": ${ddgst:-false}
00:24:34.155 },
00:24:34.155 "method": "bdev_nvme_attach_controller"
00:24:34.155 }
00:24:34.155 EOF
00:24:34.155 )")
00:24:34.155 02:30:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:24:34.155 [2024-07-11 02:30:24.391028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:24:34.155 [2024-07-11 02:30:24.391079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:24:34.155 02:30:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
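One way to sanity-check the bdevperf results above: the MiB/s column should equal the reported IOPS times the 8192-byte IO size (the -o 8192 argument). A minimal sketch of that arithmetic, assuming the column is computed exactly this way:

```shell
# Recompute the MiB/s column from the reported 5339.97 IOPS and the 8192-byte IO size.
# 5339.97 IOs/s * 8192 B/IO, divided by 2^20 B/MiB, gives MiB/s.
mibs=$(awk 'BEGIN { printf "%.2f", 5339.97 * 8192 / 1048576 }')
echo "$mibs"   # prints 41.72, matching the MiB/s column in the table
```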
00:24:34.155 02:30:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:24:34.155 02:30:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:24:34.155 "params": {
00:24:34.155 "name": "Nvme1",
00:24:34.155 "trtype": "tcp",
00:24:34.155 "traddr": "10.0.0.2",
00:24:34.155 "adrfam": "ipv4",
00:24:34.155 "trsvcid": "4420",
00:24:34.155 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:24:34.155 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:24:34.155 "hdgst": false,
00:24:34.155 "ddgst": false
00:24:34.155 },
00:24:34.155 "method": "bdev_nvme_attach_controller"
00:24:34.155 }'
00:24:34.155 [2024-07-11 02:30:24.398977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:24:34.155 [2024-07-11 02:30:24.399002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:24:34.155 [2024-07-11 02:30:24.406999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:24:34.155 [2024-07-11 02:30:24.407023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:24:34.155 [2024-07-11 02:30:24.415024] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:24:34.155 [2024-07-11 02:30:24.415049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:24:34.155 [2024-07-11 02:30:24.423064] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:24:34.155 [2024-07-11 02:30:24.423095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:24:34.155 [2024-07-11 02:30:24.430755] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
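The gen_nvmf_target_json trace above can be condensed into a standalone sketch of the heredoc pattern it uses: one JSON stanza per subsystem is appended to the config array, then the stanzas are joined with IFS=,. Variable names mirror the trace; the concrete values are the ones this run rendered. This is a sketch, not the full helper, which also pipes the result through jq and feeds bdevperf via --json /dev/fd/63.

```shell
#!/usr/bin/env bash
# Condensed sketch of the config-template pattern traced above (not the full helper).
TEST_TRANSPORT=tcp             # concrete values this run used
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
config=()
for subsystem in "${@:-1}"; do         # defaults to subsystem 1 when no args are given
  # Each stanza is one bdev_nvme_attach_controller call; hdgst/ddgst default to false.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,                                   # join the per-subsystem stanzas with commas
printf '%s\n' "${config[*]}"
```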
00:24:34.155 [2024-07-11 02:30:24.430824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1848556 ] 00:24:34.155 [2024-07-11 02:30:24.431072] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.155 [2024-07-11 02:30:24.431094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.155 [2024-07-11 02:30:24.439090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.155 [2024-07-11 02:30:24.439113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.155 [2024-07-11 02:30:24.447109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.155 [2024-07-11 02:30:24.447131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.155 [2024-07-11 02:30:24.455136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.155 [2024-07-11 02:30:24.455161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.155 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.155 [2024-07-11 02:30:24.463160] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.156 [2024-07-11 02:30:24.463185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.156 [2024-07-11 02:30:24.471176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.156 [2024-07-11 02:30:24.471198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.156 [2024-07-11 02:30:24.479200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.156 [2024-07-11 02:30:24.479222] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.156 [2024-07-11 02:30:24.487229] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.156 [2024-07-11 02:30:24.487252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.156 [2024-07-11 02:30:24.490258] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.156 [2024-07-11 02:30:24.495306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.156 [2024-07-11 02:30:24.495353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.156 [2024-07-11 02:30:24.503326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.156 [2024-07-11 02:30:24.503374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.156 [2024-07-11 02:30:24.511299] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.156 [2024-07-11 02:30:24.511324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.156 [2024-07-11 02:30:24.519337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.156 [2024-07-11 02:30:24.519370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.156 [2024-07-11 02:30:24.527358] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.156 [2024-07-11 02:30:24.527386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.156 [2024-07-11 02:30:24.535404] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.156 [2024-07-11 02:30:24.535460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.156 [2024-07-11 02:30:24.543452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:24:34.156 [2024-07-11 02:30:24.543502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.156 [2024-07-11 02:30:24.551416] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.156 [2024-07-11 02:30:24.551442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.156 [2024-07-11 02:30:24.559454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.156 [2024-07-11 02:30:24.559488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.156 [2024-07-11 02:30:24.567465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.156 [2024-07-11 02:30:24.567494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.156 [2024-07-11 02:30:24.575492] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.156 [2024-07-11 02:30:24.575531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.414 [2024-07-11 02:30:24.577349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.414 [2024-07-11 02:30:24.583500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.414 [2024-07-11 02:30:24.583530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.414 [2024-07-11 02:30:24.591592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.414 [2024-07-11 02:30:24.591641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.414 [2024-07-11 02:30:24.603648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.414 [2024-07-11 02:30:24.603709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.414 [2024-07-11 02:30:24.611656] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.414 [2024-07-11 02:30:24.611707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.414 [2024-07-11 02:30:24.619684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.414 [2024-07-11 02:30:24.619737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.414 [2024-07-11 02:30:24.627691] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.414 [2024-07-11 02:30:24.627742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.414 [2024-07-11 02:30:24.635701] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.414 [2024-07-11 02:30:24.635747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.414 [2024-07-11 02:30:24.643740] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.414 [2024-07-11 02:30:24.643788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.414 [2024-07-11 02:30:24.651752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.414 [2024-07-11 02:30:24.651798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.414 [2024-07-11 02:30:24.659726] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.414 [2024-07-11 02:30:24.659754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.414 [2024-07-11 02:30:24.667750] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.414 [2024-07-11 02:30:24.667777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.414 [2024-07-11 02:30:24.675774] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:24:34.414 [2024-07-11 02:30:24.675801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.414 [2024-07-11 02:30:24.683795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.414 [2024-07-11 02:30:24.683832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.414 [2024-07-11 02:30:24.691822] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.414 [2024-07-11 02:30:24.691848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.414 [2024-07-11 02:30:24.699844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.414 [2024-07-11 02:30:24.699869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.414 [2024-07-11 02:30:24.707867] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.414 [2024-07-11 02:30:24.707893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.414 [2024-07-11 02:30:24.715883] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.414 [2024-07-11 02:30:24.715908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.414 [2024-07-11 02:30:24.723911] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.414 [2024-07-11 02:30:24.723935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.414 [2024-07-11 02:30:24.767753] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.414 [2024-07-11 02:30:24.767781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.414 [2024-07-11 02:30:24.772058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.414 
[2024-07-11 02:30:24.772082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.414 Running I/O for 5 seconds... 00:24:34.414 [2024-07-11 02:30:24.780066] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.414 [2024-07-11 02:30:24.780089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.414 [2024-07-11 02:30:24.792991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.414 [2024-07-11 02:30:24.793021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.414 [2024-07-11 02:30:24.803615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.414 [2024-07-11 02:30:24.803646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.414 [2024-07-11 02:30:24.816789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.414 [2024-07-11 02:30:24.816818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.414 [2024-07-11 02:30:24.828887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.414 [2024-07-11 02:30:24.828917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.672 [2024-07-11 02:30:24.841401] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.672 [2024-07-11 02:30:24.841432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.672 [2024-07-11 02:30:24.853823] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.672 [2024-07-11 02:30:24.853853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.672 [2024-07-11 02:30:24.866086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.672 [2024-07-11 
02:30:24.866116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.672 [2024-07-11 02:30:24.877904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.672 [2024-07-11 02:30:24.877932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.672 [2024-07-11 02:30:24.889959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.672 [2024-07-11 02:30:24.889988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.672 [2024-07-11 02:30:24.902151] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.672 [2024-07-11 02:30:24.902180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.672 [2024-07-11 02:30:24.914080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.672 [2024-07-11 02:30:24.914124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.672 [2024-07-11 02:30:24.926117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.672 [2024-07-11 02:30:24.926153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.672 [2024-07-11 02:30:24.938330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.672 [2024-07-11 02:30:24.938358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.672 [2024-07-11 02:30:24.950244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.672 [2024-07-11 02:30:24.950282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.672 [2024-07-11 02:30:24.962090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.672 [2024-07-11 02:30:24.962118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:24:34.672 [2024-07-11 02:30:24.974127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.672 [2024-07-11 02:30:24.974161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.672 [2024-07-11 02:30:24.986079] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.672 [2024-07-11 02:30:24.986108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.672 [2024-07-11 02:30:24.998382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.672 [2024-07-11 02:30:24.998418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.672 [2024-07-11 02:30:25.010660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.673 [2024-07-11 02:30:25.010689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.673 [2024-07-11 02:30:25.022715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.673 [2024-07-11 02:30:25.022744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.673 [2024-07-11 02:30:25.035094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.673 [2024-07-11 02:30:25.035123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.673 [2024-07-11 02:30:25.047123] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.673 [2024-07-11 02:30:25.047152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.673 [2024-07-11 02:30:25.059605] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.673 [2024-07-11 02:30:25.059633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.673 
[2024-07-11 02:30:25.071433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.673 [2024-07-11 02:30:25.071462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.673 [2024-07-11 02:30:25.083150] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.673 [2024-07-11 02:30:25.083178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.930 [2024-07-11 02:30:25.095028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.930 [2024-07-11 02:30:25.095078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.930 [2024-07-11 02:30:25.107240] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.930 [2024-07-11 02:30:25.107277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.930 [2024-07-11 02:30:25.119189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.930 [2024-07-11 02:30:25.119225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.930 [2024-07-11 02:30:25.131332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.930 [2024-07-11 02:30:25.131361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.930 [2024-07-11 02:30:25.143556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.930 [2024-07-11 02:30:25.143596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.930 [2024-07-11 02:30:25.155825] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.930 [2024-07-11 02:30:25.155853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.930 [2024-07-11 02:30:25.168023] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.931 [2024-07-11 02:30:25.168051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.931 [2024-07-11 02:30:25.182150] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.931 [2024-07-11 02:30:25.182179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.931 [2024-07-11 02:30:25.193989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.931 [2024-07-11 02:30:25.194026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.931 [2024-07-11 02:30:25.206165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.931 [2024-07-11 02:30:25.206193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.931 [2024-07-11 02:30:25.218347] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.931 [2024-07-11 02:30:25.218375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.931 [2024-07-11 02:30:25.230260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.931 [2024-07-11 02:30:25.230289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.931 [2024-07-11 02:30:25.242409] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.931 [2024-07-11 02:30:25.242441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.931 [2024-07-11 02:30:25.254457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.931 [2024-07-11 02:30:25.254485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.931 [2024-07-11 02:30:25.266803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:24:34.931 [2024-07-11 02:30:25.266839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.931 [2024-07-11 02:30:25.278988] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.931 [2024-07-11 02:30:25.279016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.931 [2024-07-11 02:30:25.291244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.931 [2024-07-11 02:30:25.291274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.931 [2024-07-11 02:30:25.303265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.931 [2024-07-11 02:30:25.303298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.931 [2024-07-11 02:30:25.315316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.931 [2024-07-11 02:30:25.315344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.931 [2024-07-11 02:30:25.327491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.931 [2024-07-11 02:30:25.327539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:34.931 [2024-07-11 02:30:25.340073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:34.931 [2024-07-11 02:30:25.340105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:35.189 [2024-07-11 02:30:25.352835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:35.189 [2024-07-11 02:30:25.352866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:35.189 [2024-07-11 02:30:25.364835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:35.189 
[2024-07-11 02:30:25.364865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:35.189 [2024-07-11 02:30:25.376694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:35.189 [2024-07-11 02:30:25.376730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:35.189 [2024-07-11 02:30:25.390556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:35.189 [2024-07-11 02:30:25.390592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:35.189 [2024-07-11 02:30:25.402142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:35.189 [2024-07-11 02:30:25.402179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:35.189 [2024-07-11 02:30:25.414382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:35.189 [2024-07-11 02:30:25.414411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:35.189 [2024-07-11 02:30:25.426080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:35.189 [2024-07-11 02:30:25.426110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:35.189 [2024-07-11 02:30:25.437895] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:35.189 [2024-07-11 02:30:25.437925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:35.189 [2024-07-11 02:30:25.449955] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:35.189 [2024-07-11 02:30:25.449984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:35.189 [2024-07-11 02:30:25.462140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:35.189 [2024-07-11 02:30:25.462170] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:35.189 [2024-07-11 02:30:25.476187] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:35.189 [2024-07-11 02:30:25.476216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-entry error pair — subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats roughly every 12 ms, from [2024-07-11 02:30:25.487575] through [2024-07-11 02:30:27.483082], where the log entry is cut off mid-line ...]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.252 [2024-07-11 02:30:27.494955] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.252 [2024-07-11 02:30:27.494983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.252 [2024-07-11 02:30:27.506786] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.252 [2024-07-11 02:30:27.506815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.252 [2024-07-11 02:30:27.518505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.252 [2024-07-11 02:30:27.518541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.252 [2024-07-11 02:30:27.530863] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.252 [2024-07-11 02:30:27.530891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.252 [2024-07-11 02:30:27.543060] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.252 [2024-07-11 02:30:27.543088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.252 [2024-07-11 02:30:27.554935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.252 [2024-07-11 02:30:27.554963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.252 [2024-07-11 02:30:27.567452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.252 [2024-07-11 02:30:27.567480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.252 [2024-07-11 02:30:27.579928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.252 [2024-07-11 02:30:27.579957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:24:37.252 [2024-07-11 02:30:27.592120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.252 [2024-07-11 02:30:27.592149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.252 [2024-07-11 02:30:27.604085] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.252 [2024-07-11 02:30:27.604113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.252 [2024-07-11 02:30:27.616039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.252 [2024-07-11 02:30:27.616067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.252 [2024-07-11 02:30:27.628197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.252 [2024-07-11 02:30:27.628226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.252 [2024-07-11 02:30:27.640560] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.252 [2024-07-11 02:30:27.640589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.252 [2024-07-11 02:30:27.652656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.252 [2024-07-11 02:30:27.652685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.252 [2024-07-11 02:30:27.664675] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.252 [2024-07-11 02:30:27.664704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.510 [2024-07-11 02:30:27.676923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.510 [2024-07-11 02:30:27.676952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.510 [2024-07-11 02:30:27.691259] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.510 [2024-07-11 02:30:27.691287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.510 [2024-07-11 02:30:27.702781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.510 [2024-07-11 02:30:27.702810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.510 [2024-07-11 02:30:27.714811] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.510 [2024-07-11 02:30:27.714840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.510 [2024-07-11 02:30:27.727043] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.510 [2024-07-11 02:30:27.727071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.510 [2024-07-11 02:30:27.739237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.510 [2024-07-11 02:30:27.739265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.510 [2024-07-11 02:30:27.751598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.510 [2024-07-11 02:30:27.751627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.510 [2024-07-11 02:30:27.763756] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.510 [2024-07-11 02:30:27.763784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.510 [2024-07-11 02:30:27.776191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.510 [2024-07-11 02:30:27.776219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.510 [2024-07-11 02:30:27.788084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:24:37.510 [2024-07-11 02:30:27.788113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.510 [2024-07-11 02:30:27.800289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.510 [2024-07-11 02:30:27.800317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.510 [2024-07-11 02:30:27.812614] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.510 [2024-07-11 02:30:27.812642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.510 [2024-07-11 02:30:27.824444] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.510 [2024-07-11 02:30:27.824477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.510 [2024-07-11 02:30:27.836646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.510 [2024-07-11 02:30:27.836674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.510 [2024-07-11 02:30:27.849188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.510 [2024-07-11 02:30:27.849228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.510 [2024-07-11 02:30:27.861457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.510 [2024-07-11 02:30:27.861490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.510 [2024-07-11 02:30:27.873844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.510 [2024-07-11 02:30:27.873873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.510 [2024-07-11 02:30:27.885915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.510 
[2024-07-11 02:30:27.885943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.510 [2024-07-11 02:30:27.898005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.510 [2024-07-11 02:30:27.898033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.511 [2024-07-11 02:30:27.910014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.511 [2024-07-11 02:30:27.910042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.511 [2024-07-11 02:30:27.922218] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.511 [2024-07-11 02:30:27.922254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.769 [2024-07-11 02:30:27.934623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.769 [2024-07-11 02:30:27.934651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.769 [2024-07-11 02:30:27.947102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.769 [2024-07-11 02:30:27.947130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.769 [2024-07-11 02:30:27.959462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.769 [2024-07-11 02:30:27.959491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.769 [2024-07-11 02:30:27.971655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.769 [2024-07-11 02:30:27.971683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.769 [2024-07-11 02:30:27.983655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.769 [2024-07-11 02:30:27.983686] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.769 [2024-07-11 02:30:27.995624] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.769 [2024-07-11 02:30:27.995651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.769 [2024-07-11 02:30:28.007473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.769 [2024-07-11 02:30:28.007501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.769 [2024-07-11 02:30:28.019795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.769 [2024-07-11 02:30:28.019824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.769 [2024-07-11 02:30:28.032202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.769 [2024-07-11 02:30:28.032231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.769 [2024-07-11 02:30:28.044281] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.769 [2024-07-11 02:30:28.044315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.769 [2024-07-11 02:30:28.056462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.769 [2024-07-11 02:30:28.056495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.769 [2024-07-11 02:30:28.068914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.769 [2024-07-11 02:30:28.068943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.769 [2024-07-11 02:30:28.081142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.769 [2024-07-11 02:30:28.081173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:24:37.769 [2024-07-11 02:30:28.093459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.769 [2024-07-11 02:30:28.093495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.769 [2024-07-11 02:30:28.106089] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.769 [2024-07-11 02:30:28.106118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.769 [2024-07-11 02:30:28.118384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.769 [2024-07-11 02:30:28.118415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.769 [2024-07-11 02:30:28.130641] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.769 [2024-07-11 02:30:28.130669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.769 [2024-07-11 02:30:28.142713] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.769 [2024-07-11 02:30:28.142743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.769 [2024-07-11 02:30:28.155200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.769 [2024-07-11 02:30:28.155241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.769 [2024-07-11 02:30:28.167348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.769 [2024-07-11 02:30:28.167376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:37.769 [2024-07-11 02:30:28.179803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:37.769 [2024-07-11 02:30:28.179832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.027 [2024-07-11 02:30:28.192202] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.027 [2024-07-11 02:30:28.192233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.027 [2024-07-11 02:30:28.204215] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.027 [2024-07-11 02:30:28.204244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.027 [2024-07-11 02:30:28.216695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.027 [2024-07-11 02:30:28.216724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.027 [2024-07-11 02:30:28.229398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.027 [2024-07-11 02:30:28.229426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.027 [2024-07-11 02:30:28.241548] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.027 [2024-07-11 02:30:28.241577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.027 [2024-07-11 02:30:28.253876] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.027 [2024-07-11 02:30:28.253904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.027 [2024-07-11 02:30:28.266027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.027 [2024-07-11 02:30:28.266072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.027 [2024-07-11 02:30:28.278148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.027 [2024-07-11 02:30:28.278176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.027 [2024-07-11 02:30:28.290319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:24:38.027 [2024-07-11 02:30:28.290354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.027 [2024-07-11 02:30:28.302212] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.027 [2024-07-11 02:30:28.302241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.027 [2024-07-11 02:30:28.314497] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.027 [2024-07-11 02:30:28.314534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.027 [2024-07-11 02:30:28.326657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.027 [2024-07-11 02:30:28.326687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.027 [2024-07-11 02:30:28.339273] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.027 [2024-07-11 02:30:28.339302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.027 [2024-07-11 02:30:28.351523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.027 [2024-07-11 02:30:28.351552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.027 [2024-07-11 02:30:28.363797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.027 [2024-07-11 02:30:28.363825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.027 [2024-07-11 02:30:28.375848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.027 [2024-07-11 02:30:28.375875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.027 [2024-07-11 02:30:28.388014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.027 
[2024-07-11 02:30:28.388068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.027 [2024-07-11 02:30:28.399684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.027 [2024-07-11 02:30:28.399713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.027 [2024-07-11 02:30:28.411377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.027 [2024-07-11 02:30:28.411410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.027 [2024-07-11 02:30:28.423697] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.027 [2024-07-11 02:30:28.423725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.027 [2024-07-11 02:30:28.435600] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.027 [2024-07-11 02:30:28.435629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.027 [2024-07-11 02:30:28.447938] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.027 [2024-07-11 02:30:28.447966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.285 [2024-07-11 02:30:28.460115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.285 [2024-07-11 02:30:28.460144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.285 [2024-07-11 02:30:28.472290] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.285 [2024-07-11 02:30:28.472322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.285 [2024-07-11 02:30:28.484288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.285 [2024-07-11 02:30:28.484317] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.285 [2024-07-11 02:30:28.498161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.285 [2024-07-11 02:30:28.498189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.285 [2024-07-11 02:30:28.509864] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.285 [2024-07-11 02:30:28.509893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.285 [2024-07-11 02:30:28.521983] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.285 [2024-07-11 02:30:28.522011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.285 [2024-07-11 02:30:28.534162] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.285 [2024-07-11 02:30:28.534190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.285 [2024-07-11 02:30:28.546739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.285 [2024-07-11 02:30:28.546767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.285 [2024-07-11 02:30:28.559237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.285 [2024-07-11 02:30:28.559277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.285 [2024-07-11 02:30:28.571508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.285 [2024-07-11 02:30:28.571551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.285 [2024-07-11 02:30:28.583878] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.285 [2024-07-11 02:30:28.583906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:24:38.285 [2024-07-11 02:30:28.595876] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.285 [2024-07-11 02:30:28.595904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.285 [2024-07-11 02:30:28.607999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.285 [2024-07-11 02:30:28.608027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.285 [2024-07-11 02:30:28.620030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.285 [2024-07-11 02:30:28.620073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.285 [2024-07-11 02:30:28.632810] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.285 [2024-07-11 02:30:28.632838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.285 [2024-07-11 02:30:28.645260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.285 [2024-07-11 02:30:28.645288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.285 [2024-07-11 02:30:28.657741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.285 [2024-07-11 02:30:28.657769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.285 [2024-07-11 02:30:28.669872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.285 [2024-07-11 02:30:28.669900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.285 [2024-07-11 02:30:28.682641] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.285 [2024-07-11 02:30:28.682672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.285 [2024-07-11 02:30:28.695455] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.285 [2024-07-11 02:30:28.695483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.543 [2024-07-11 02:30:28.707866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.543 [2024-07-11 02:30:28.707895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.543 [2024-07-11 02:30:28.720449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.543 [2024-07-11 02:30:28.720478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.543 [2024-07-11 02:30:28.732747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.543 [2024-07-11 02:30:28.732777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.543 [2024-07-11 02:30:28.744996] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.543 [2024-07-11 02:30:28.745025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.543 [2024-07-11 02:30:28.756983] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.543 [2024-07-11 02:30:28.757012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.543 [2024-07-11 02:30:28.769136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.543 [2024-07-11 02:30:28.769165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.543 [2024-07-11 02:30:28.781890] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:38.543 [2024-07-11 02:30:28.781919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.543 [2024-07-11 02:30:28.794079] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:24:38.543 [2024-07-11 02:30:28.794108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:38.543 [... this pair of errors repeats at roughly 12 ms intervals from 02:30:28.806 through 02:30:29.793 ...] 00:24:39.576 [2024-07-11 02:30:29.803328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext:
*ERROR*: Requested NSID 1 already in use 00:24:39.576 [2024-07-11 02:30:29.803359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:24:39.576
00:24:39.576 Latency(us)
00:24:39.576 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:39.576 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:24:39.576 Nvme1n1 : 5.01 10386.09 81.14 0.00 0.00 12307.49 5728.33 20680.25
00:24:39.576 ===================================================================================================================
00:24:39.576 Total : 10386.09 81.14 0.00 0.00 12307.49 5728.33 20680.25
00:24:39.576 [2024-07-11 02:30:29.808556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:39.576 [2024-07-11 02:30:29.808582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:39.576 [2024-07-11 02:30:29.816572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:39.576 [2024-07-11 02:30:29.816600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:39.576 [2024-07-11 02:30:29.824651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:39.576 [2024-07-11 02:30:29.824710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:39.576 [2024-07-11 02:30:29.832667] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:39.576 [2024-07-11 02:30:29.832725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:39.576 [2024-07-11 02:30:29.840698] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:39.576 [2024-07-11 02:30:29.840753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:39.576 [2024-07-11 02:30:29.848730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in
use 00:24:39.576 [2024-07-11 02:30:29.848788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:39.576 [... the same pair of errors repeats at roughly 8 ms intervals through 02:30:29.960 ...] [2024-07-11 02:30:29.968979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:39.576 [2024-07-11 02:30:29.969001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:39.577 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1848556) - No such process 00:24:39.577 02:30:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1848556 00:24:39.577
02:30:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:39.577 02:30:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.577 02:30:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:39.577 02:30:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.577 02:30:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:24:39.577 02:30:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.577 02:30:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:39.577 delay0 00:24:39.577 02:30:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.577 02:30:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:24:39.577 02:30:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.577 02:30:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:39.835 02:30:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.835 02:30:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:24:39.835 EAL: No free 2048 kB hugepages reported on node 1 00:24:39.835 [2024-07-11 02:30:30.138652] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:24:47.940 Initializing NVMe Controllers 00:24:47.940 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:47.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:47.940 
Initialization complete. Launching workers. 00:24:47.940 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 268, failed: 12856 00:24:47.940 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 13034, failed to submit 90 00:24:47.940 success 12928, unsuccess 106, failed 0 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:47.940 rmmod nvme_tcp 00:24:47.940 rmmod nvme_fabrics 00:24:47.940 rmmod nvme_keyring 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1847570 ']' 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1847570 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1847570 ']' 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1847570 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers 
-o comm= 1847570 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1847570' 00:24:47.940 killing process with pid 1847570 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1847570 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1847570 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:47.940 02:30:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.323 02:30:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:49.323 00:24:49.323 real 0m27.888s 00:24:49.323 user 0m41.167s 00:24:49.323 sys 0m8.282s 00:24:49.323 02:30:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:49.323 02:30:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:49.323 ************************************ 00:24:49.323 END TEST nvmf_zcopy 00:24:49.323 ************************************ 00:24:49.323 02:30:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:49.323 02:30:39 nvmf_tcp -- nvmf/nvmf.sh@54 -- # 
run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:24:49.323 02:30:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:49.323 02:30:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:49.323 02:30:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:49.323 ************************************ 00:24:49.323 START TEST nvmf_nmic 00:24:49.323 ************************************ 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:24:49.323 * Looking for test storage... 00:24:49.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:49.323 
02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:24:49.323 02:30:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:51.227 02:30:41 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:24:51.227 Found 0000:08:00.0 (0x8086 - 0x159b) 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:24:51.227 Found 0000:08:00.1 (0x8086 - 0x159b) 
00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:24:51.227 Found net devices under 0000:08:00.0: cvl_0_0 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:51.227 02:30:41 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:24:51.227 Found net devices under 0000:08:00.1: cvl_0_1 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:51.227 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:51.228 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:51.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:51.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:24:51.228 00:24:51.228 --- 10.0.0.2 ping statistics --- 00:24:51.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.228 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:24:51.228 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:51.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:51.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:24:51.228 00:24:51.228 --- 10.0.0.1 ping statistics --- 00:24:51.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.228 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:24:51.228 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:51.228 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:24:51.228 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:51.228 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:51.228 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:51.228 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:51.228 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:51.228 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:51.228 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:51.228 02:30:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:24:51.228 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:51.228 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:51.228 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:51.228 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1851182 00:24:51.228 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:51.228 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1851182 00:24:51.228 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1851182 ']' 00:24:51.228 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:24:51.228 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:51.228 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.228 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:51.228 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:51.228 [2024-07-11 02:30:41.406164] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:24:51.228 [2024-07-11 02:30:41.406254] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.228 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.228 [2024-07-11 02:30:41.470394] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:51.228 [2024-07-11 02:30:41.559378] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.228 [2024-07-11 02:30:41.559437] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.228 [2024-07-11 02:30:41.559453] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.228 [2024-07-11 02:30:41.559467] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.228 [2024-07-11 02:30:41.559480] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:51.228 [2024-07-11 02:30:41.559560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.228 [2024-07-11 02:30:41.559868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:51.228 [2024-07-11 02:30:41.559919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.228 [2024-07-11 02:30:41.559916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:51.487 [2024-07-11 02:30:41.713256] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:51.487 Malloc0 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:51.487 [2024-07-11 02:30:41.763494] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:24:51.487 test case1: single bdev can't be used in multiple subsystems 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:51.487 [2024-07-11 02:30:41.787355] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:24:51.487 [2024-07-11 02:30:41.787387] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:24:51.487 [2024-07-11 02:30:41.787404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.487 request: 00:24:51.487 { 00:24:51.487 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:24:51.487 "namespace": { 00:24:51.487 "bdev_name": "Malloc0", 00:24:51.487 "no_auto_visible": false 00:24:51.487 }, 00:24:51.487 "method": "nvmf_subsystem_add_ns", 00:24:51.487 "req_id": 1 00:24:51.487 } 00:24:51.487 Got JSON-RPC error response 00:24:51.487 response: 00:24:51.487 { 00:24:51.487 "code": -32602, 00:24:51.487 "message": "Invalid parameters" 00:24:51.487 } 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:24:51.487 Adding namespace failed - expected result. 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:24:51.487 test case2: host connect to nvmf target in multiple paths 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:51.487 [2024-07-11 02:30:41.795460] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.487 02:30:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:52.054 02:30:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:24:52.321 02:30:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:24:52.321 02:30:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:24:52.321 02:30:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:52.321 02:30:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:52.321 02:30:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:24:54.846 02:30:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:54.846 02:30:44 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:54.846 02:30:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:24:54.846 02:30:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:54.846 02:30:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:54.846 02:30:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:24:54.846 02:30:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:24:54.846 [global] 00:24:54.846 thread=1 00:24:54.846 invalidate=1 00:24:54.846 rw=write 00:24:54.846 time_based=1 00:24:54.846 runtime=1 00:24:54.846 ioengine=libaio 00:24:54.846 direct=1 00:24:54.846 bs=4096 00:24:54.846 iodepth=1 00:24:54.846 norandommap=0 00:24:54.846 numjobs=1 00:24:54.846 00:24:54.846 verify_dump=1 00:24:54.846 verify_backlog=512 00:24:54.846 verify_state_save=0 00:24:54.846 do_verify=1 00:24:54.846 verify=crc32c-intel 00:24:54.846 [job0] 00:24:54.846 filename=/dev/nvme0n1 00:24:54.846 Could not set queue depth (nvme0n1) 00:24:54.846 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:54.846 fio-3.35 00:24:54.846 Starting 1 thread 00:24:55.780 00:24:55.780 job0: (groupid=0, jobs=1): err= 0: pid=1851580: Thu Jul 11 02:30:46 2024 00:24:55.780 read: IOPS=21, BW=87.0KiB/s (89.0kB/s)(88.0KiB/1012msec) 00:24:55.780 slat (nsec): min=9206, max=42799, avg=25308.09, stdev=9259.36 00:24:55.780 clat (usec): min=40839, max=45042, avg=41841.32, stdev=907.48 00:24:55.780 lat (usec): min=40877, max=45085, avg=41866.63, stdev=910.04 00:24:55.780 clat percentiles (usec): 00:24:55.780 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:24:55.780 | 30.00th=[41157], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 
00:24:55.780 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[43254], 00:24:55.780 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:24:55.780 | 99.99th=[44827] 00:24:55.780 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:24:55.780 slat (nsec): min=7849, max=39194, avg=9219.80, stdev=2331.85 00:24:55.780 clat (usec): min=146, max=272, avg=164.18, stdev=14.17 00:24:55.780 lat (usec): min=154, max=292, avg=173.40, stdev=15.28 00:24:55.780 clat percentiles (usec): 00:24:55.780 | 1.00th=[ 149], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 155], 00:24:55.780 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:24:55.780 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 180], 95.00th=[ 188], 00:24:55.780 | 99.00th=[ 215], 99.50th=[ 253], 99.90th=[ 273], 99.95th=[ 273], 00:24:55.780 | 99.99th=[ 273] 00:24:55.780 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:24:55.780 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:24:55.780 lat (usec) : 250=95.32%, 500=0.56% 00:24:55.780 lat (msec) : 50=4.12% 00:24:55.780 cpu : usr=0.20%, sys=0.79%, ctx=534, majf=0, minf=1 00:24:55.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:55.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:55.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:55.780 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:55.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:55.780 00:24:55.780 Run status group 0 (all jobs): 00:24:55.780 READ: bw=87.0KiB/s (89.0kB/s), 87.0KiB/s-87.0KiB/s (89.0kB/s-89.0kB/s), io=88.0KiB (90.1kB), run=1012-1012msec 00:24:55.780 WRITE: bw=2024KiB/s (2072kB/s), 2024KiB/s-2024KiB/s (2072kB/s-2072kB/s), io=2048KiB (2097kB), run=1012-1012msec 00:24:55.780 00:24:55.780 Disk stats (read/write): 00:24:55.780 nvme0n1: ios=69/512, 
merge=0/0, ticks=824/81, in_queue=905, util=91.38% 00:24:55.780 02:30:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:55.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:24:55.780 02:30:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:55.780 02:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:24:55.780 02:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:24:55.780 02:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:56.039 02:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:24:56.039 02:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:56.039 02:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:24:56.039 02:30:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:24:56.039 02:30:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:24:56.039 02:30:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:56.039 02:30:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:24:56.039 02:30:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:56.039 02:30:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:24:56.039 02:30:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:56.039 02:30:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:56.039 rmmod nvme_tcp 00:24:56.039 rmmod nvme_fabrics 00:24:56.039 rmmod nvme_keyring 00:24:56.039 02:30:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:56.039 02:30:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:24:56.039 02:30:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:24:56.039 02:30:46 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1851182 ']' 00:24:56.039 02:30:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1851182 00:24:56.039 02:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1851182 ']' 00:24:56.039 02:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1851182 00:24:56.039 02:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:24:56.039 02:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:56.039 02:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1851182 00:24:56.039 02:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:56.039 02:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:56.039 02:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1851182' 00:24:56.039 killing process with pid 1851182 00:24:56.039 02:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1851182 00:24:56.039 02:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1851182 00:24:56.299 02:30:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:56.299 02:30:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:56.299 02:30:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:56.299 02:30:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:56.299 02:30:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:56.299 02:30:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.299 02:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:56.299 02:30:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.239 02:30:48 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:58.239 00:24:58.239 real 0m8.986s 00:24:58.239 user 0m20.262s 00:24:58.239 sys 0m1.978s 00:24:58.239 02:30:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:58.239 02:30:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:58.239 ************************************ 00:24:58.239 END TEST nvmf_nmic 00:24:58.239 ************************************ 00:24:58.239 02:30:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:58.239 02:30:48 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:24:58.239 02:30:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:58.239 02:30:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:58.239 02:30:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:58.239 ************************************ 00:24:58.239 START TEST nvmf_fio_target 00:24:58.239 ************************************ 00:24:58.239 02:30:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:24:58.239 * Looking for test storage... 
00:24:58.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:58.502 02:30:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:58.502 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:24:58.502 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:58.502 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:58.502 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:58.502 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:58.502 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:58.502 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:58.502 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:58.502 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:58.502 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:58.502 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:58.502 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:24:58.503 02:30:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:59.885 
02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:59.885 
02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:24:59.885 Found 0000:08:00.0 (0x8086 - 0x159b) 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:24:59.885 Found 0000:08:00.1 (0x8086 - 0x159b) 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:24:59.885 Found net devices under 0000:08:00.0: cvl_0_0 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:24:59.885 Found net devices under 0000:08:00.1: cvl_0_1 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:59.885 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:59.886 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:59.886 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:59.886 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:59.886 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:59.886 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:59.886 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:59.886 02:30:50 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:59.886 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:59.886 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:59.886 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:59.886 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:59.886 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.144 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:00.144 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:00.144 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.144 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.144 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.144 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:00.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:00.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:25:00.144 00:25:00.144 --- 10.0.0.2 ping statistics --- 00:25:00.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.144 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:25:00.144 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:00.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:25:00.144 00:25:00.144 --- 10.0.0.1 ping statistics --- 00:25:00.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.144 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:25:00.144 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.144 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:25:00.144 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:00.144 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.144 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:00.144 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:00.144 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.144 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:00.145 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:00.145 02:30:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:25:00.145 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:00.145 02:30:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:00.145 02:30:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:25:00.145 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1853185 00:25:00.145 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:00.145 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1853185 00:25:00.145 02:30:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 
-- # '[' -z 1853185 ']' 00:25:00.145 02:30:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.145 02:30:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:00.145 02:30:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.145 02:30:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:00.145 02:30:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:25:00.145 [2024-07-11 02:30:50.445488] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:25:00.145 [2024-07-11 02:30:50.445597] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.145 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.145 [2024-07-11 02:30:50.514242] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:00.403 [2024-07-11 02:30:50.605464] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.403 [2024-07-11 02:30:50.605534] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.403 [2024-07-11 02:30:50.605553] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.403 [2024-07-11 02:30:50.605571] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.403 [2024-07-11 02:30:50.605583] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:00.403 [2024-07-11 02:30:50.605661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.403 [2024-07-11 02:30:50.605727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:00.403 [2024-07-11 02:30:50.605789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:00.403 [2024-07-11 02:30:50.605823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.403 02:30:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:00.403 02:30:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:25:00.403 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:00.403 02:30:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:00.403 02:30:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:25:00.403 02:30:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.403 02:30:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:00.662 [2024-07-11 02:30:51.025144] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.662 02:30:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:01.228 02:30:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:25:01.228 02:30:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:01.487 02:30:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:25:01.487 02:30:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:25:01.747 02:30:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:25:01.747 02:30:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:02.005 02:30:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:25:02.005 02:30:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:25:02.263 02:30:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:02.522 02:30:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:25:02.522 02:30:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:02.781 02:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:25:02.781 02:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:03.039 02:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:25:03.039 02:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:25:03.300 02:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:03.560 02:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:25:03.560 02:30:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:03.817 02:30:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:25:03.817 02:30:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:04.074 02:30:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:04.332 [2024-07-11 02:30:54.604294] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:04.332 02:30:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:25:04.590 02:30:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:25:04.849 02:30:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:05.414 02:30:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:25:05.414 02:30:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:25:05.414 02:30:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:05.414 02:30:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:25:05.414 02:30:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:25:05.414 02:30:55 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:25:07.310 02:30:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:07.310 02:30:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:07.310 02:30:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:07.310 02:30:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:25:07.310 02:30:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:07.310 02:30:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:25:07.311 02:30:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:25:07.311 [global] 00:25:07.311 thread=1 00:25:07.311 invalidate=1 00:25:07.311 rw=write 00:25:07.311 time_based=1 00:25:07.311 runtime=1 00:25:07.311 ioengine=libaio 00:25:07.311 direct=1 00:25:07.311 bs=4096 00:25:07.311 iodepth=1 00:25:07.311 norandommap=0 00:25:07.311 numjobs=1 00:25:07.311 00:25:07.311 verify_dump=1 00:25:07.311 verify_backlog=512 00:25:07.311 verify_state_save=0 00:25:07.311 do_verify=1 00:25:07.311 verify=crc32c-intel 00:25:07.311 [job0] 00:25:07.311 filename=/dev/nvme0n1 00:25:07.311 [job1] 00:25:07.311 filename=/dev/nvme0n2 00:25:07.311 [job2] 00:25:07.311 filename=/dev/nvme0n3 00:25:07.311 [job3] 00:25:07.311 filename=/dev/nvme0n4 00:25:07.311 Could not set queue depth (nvme0n1) 00:25:07.311 Could not set queue depth (nvme0n2) 00:25:07.311 Could not set queue depth (nvme0n3) 00:25:07.311 Could not set queue depth (nvme0n4) 00:25:07.569 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:07.569 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:25:07.569 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:07.569 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:07.569 fio-3.35 00:25:07.569 Starting 4 threads 00:25:08.944 00:25:08.944 job0: (groupid=0, jobs=1): err= 0: pid=1854014: Thu Jul 11 02:30:59 2024 00:25:08.944 read: IOPS=20, BW=83.3KiB/s (85.3kB/s)(84.0KiB/1008msec) 00:25:08.944 slat (nsec): min=7704, max=34564, avg=19795.86, stdev=7740.09 00:25:08.944 clat (usec): min=40906, max=41992, avg=41323.61, stdev=476.27 00:25:08.944 lat (usec): min=40938, max=42008, avg=41343.40, stdev=475.27 00:25:08.944 clat percentiles (usec): 00:25:08.944 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:25:08.944 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:25:08.944 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:25:08.944 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:25:08.944 | 99.99th=[42206] 00:25:08.944 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:25:08.944 slat (nsec): min=8584, max=46568, avg=12911.75, stdev=6898.09 00:25:08.944 clat (usec): min=164, max=368, avg=256.11, stdev=25.82 00:25:08.944 lat (usec): min=173, max=383, avg=269.02, stdev=26.73 00:25:08.944 clat percentiles (usec): 00:25:08.944 | 1.00th=[ 190], 5.00th=[ 217], 10.00th=[ 229], 20.00th=[ 241], 00:25:08.944 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 262], 00:25:08.944 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 302], 00:25:08.944 | 99.00th=[ 326], 99.50th=[ 330], 99.90th=[ 367], 99.95th=[ 367], 00:25:08.944 | 99.99th=[ 367] 00:25:08.944 bw ( KiB/s): min= 4096, max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:25:08.944 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:25:08.944 lat (usec) : 250=44.09%, 500=51.97% 
00:25:08.944 lat (msec) : 50=3.94% 00:25:08.944 cpu : usr=0.20%, sys=1.09%, ctx=535, majf=0, minf=1 00:25:08.944 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:08.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.944 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.944 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:08.944 job1: (groupid=0, jobs=1): err= 0: pid=1854015: Thu Jul 11 02:30:59 2024 00:25:08.944 read: IOPS=21, BW=85.5KiB/s (87.6kB/s)(88.0KiB/1029msec) 00:25:08.944 slat (nsec): min=8202, max=32536, avg=21017.09, stdev=8303.19 00:25:08.944 clat (usec): min=40738, max=42035, avg=41277.96, stdev=482.22 00:25:08.944 lat (usec): min=40746, max=42049, avg=41298.98, stdev=482.94 00:25:08.944 clat percentiles (usec): 00:25:08.944 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:25:08.944 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:25:08.944 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:25:08.944 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:25:08.944 | 99.99th=[42206] 00:25:08.944 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:25:08.944 slat (nsec): min=7924, max=37505, avg=12565.41, stdev=5585.76 00:25:08.944 clat (usec): min=152, max=418, avg=218.68, stdev=33.47 00:25:08.944 lat (usec): min=161, max=435, avg=231.25, stdev=35.86 00:25:08.944 clat percentiles (usec): 00:25:08.944 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 174], 20.00th=[ 190], 00:25:08.944 | 30.00th=[ 202], 40.00th=[ 212], 50.00th=[ 221], 60.00th=[ 227], 00:25:08.944 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 258], 95.00th=[ 269], 00:25:08.944 | 99.00th=[ 314], 99.50th=[ 334], 99.90th=[ 420], 99.95th=[ 420], 00:25:08.944 | 99.99th=[ 420] 00:25:08.944 bw ( KiB/s): min= 4096, 
max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:25:08.944 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:25:08.944 lat (usec) : 250=82.96%, 500=12.92% 00:25:08.944 lat (msec) : 50=4.12% 00:25:08.944 cpu : usr=0.29%, sys=1.07%, ctx=534, majf=0, minf=1 00:25:08.944 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:08.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.944 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.944 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:08.944 job2: (groupid=0, jobs=1): err= 0: pid=1854016: Thu Jul 11 02:30:59 2024 00:25:08.944 read: IOPS=22, BW=90.8KiB/s (93.0kB/s)(92.0KiB/1013msec) 00:25:08.944 slat (nsec): min=8437, max=31427, avg=19281.83, stdev=6750.08 00:25:08.944 clat (usec): min=504, max=41995, avg=39439.98, stdev=8497.05 00:25:08.944 lat (usec): min=524, max=42012, avg=39459.26, stdev=8496.78 00:25:08.944 clat percentiles (usec): 00:25:08.944 | 1.00th=[ 506], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:25:08.944 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:25:08.944 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:25:08.944 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:25:08.944 | 99.99th=[42206] 00:25:08.944 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:25:08.944 slat (nsec): min=7069, max=39820, avg=10802.30, stdev=5486.02 00:25:08.944 clat (usec): min=168, max=312, avg=192.06, stdev=18.78 00:25:08.944 lat (usec): min=176, max=335, avg=202.86, stdev=22.05 00:25:08.945 clat percentiles (usec): 00:25:08.945 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 182], 00:25:08.945 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190], 00:25:08.945 | 70.00th=[ 194], 80.00th=[ 
200], 90.00th=[ 208], 95.00th=[ 221], 00:25:08.945 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 314], 99.95th=[ 314], 00:25:08.945 | 99.99th=[ 314] 00:25:08.945 bw ( KiB/s): min= 4096, max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:25:08.945 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:25:08.945 lat (usec) : 250=92.90%, 500=2.80%, 750=0.19% 00:25:08.945 lat (msec) : 50=4.11% 00:25:08.945 cpu : usr=0.40%, sys=0.40%, ctx=536, majf=0, minf=1 00:25:08.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:08.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.945 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:08.945 job3: (groupid=0, jobs=1): err= 0: pid=1854017: Thu Jul 11 02:30:59 2024 00:25:08.945 read: IOPS=20, BW=83.7KiB/s (85.7kB/s)(84.0KiB/1004msec) 00:25:08.945 slat (nsec): min=7948, max=36817, avg=22657.67, stdev=8996.28 00:25:08.945 clat (usec): min=40870, max=42038, avg=41298.80, stdev=492.35 00:25:08.945 lat (usec): min=40878, max=42056, avg=41321.46, stdev=493.87 00:25:08.945 clat percentiles (usec): 00:25:08.945 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:25:08.945 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:25:08.945 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:25:08.945 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:25:08.945 | 99.99th=[42206] 00:25:08.945 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:25:08.945 slat (nsec): min=8638, max=48786, avg=13356.82, stdev=6681.09 00:25:08.945 clat (usec): min=177, max=452, avg=248.69, stdev=38.64 00:25:08.945 lat (usec): min=186, max=488, avg=262.05, stdev=40.05 00:25:08.945 clat percentiles (usec): 
00:25:08.945 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 208], 00:25:08.945 | 30.00th=[ 227], 40.00th=[ 243], 50.00th=[ 253], 60.00th=[ 262], 00:25:08.945 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 302], 00:25:08.945 | 99.00th=[ 383], 99.50th=[ 396], 99.90th=[ 453], 99.95th=[ 453], 00:25:08.945 | 99.99th=[ 453] 00:25:08.945 bw ( KiB/s): min= 4096, max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:25:08.945 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:25:08.945 lat (usec) : 250=44.47%, 500=51.59% 00:25:08.945 lat (msec) : 50=3.94% 00:25:08.945 cpu : usr=0.70%, sys=0.70%, ctx=534, majf=0, minf=1 00:25:08.945 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:08.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.945 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.945 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:08.945 00:25:08.945 Run status group 0 (all jobs): 00:25:08.945 READ: bw=338KiB/s (346kB/s), 83.3KiB/s-90.8KiB/s (85.3kB/s-93.0kB/s), io=348KiB (356kB), run=1004-1029msec 00:25:08.945 WRITE: bw=7961KiB/s (8152kB/s), 1990KiB/s-2040KiB/s (2038kB/s-2089kB/s), io=8192KiB (8389kB), run=1004-1029msec 00:25:08.945 00:25:08.945 Disk stats (read/write): 00:25:08.945 nvme0n1: ios=69/512, merge=0/0, ticks=1597/127, in_queue=1724, util=98.30% 00:25:08.945 nvme0n2: ios=42/512, merge=0/0, ticks=732/109, in_queue=841, util=87.49% 00:25:08.945 nvme0n3: ios=19/512, merge=0/0, ticks=742/99, in_queue=841, util=88.94% 00:25:08.945 nvme0n4: ios=45/512, merge=0/0, ticks=1690/121, in_queue=1811, util=98.42% 00:25:08.945 02:30:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:25:08.945 [global] 00:25:08.945 thread=1 
00:25:08.945 invalidate=1 00:25:08.945 rw=randwrite 00:25:08.945 time_based=1 00:25:08.945 runtime=1 00:25:08.945 ioengine=libaio 00:25:08.945 direct=1 00:25:08.945 bs=4096 00:25:08.945 iodepth=1 00:25:08.945 norandommap=0 00:25:08.945 numjobs=1 00:25:08.945 00:25:08.945 verify_dump=1 00:25:08.945 verify_backlog=512 00:25:08.945 verify_state_save=0 00:25:08.945 do_verify=1 00:25:08.945 verify=crc32c-intel 00:25:08.945 [job0] 00:25:08.945 filename=/dev/nvme0n1 00:25:08.945 [job1] 00:25:08.945 filename=/dev/nvme0n2 00:25:08.945 [job2] 00:25:08.945 filename=/dev/nvme0n3 00:25:08.945 [job3] 00:25:08.945 filename=/dev/nvme0n4 00:25:08.945 Could not set queue depth (nvme0n1) 00:25:08.945 Could not set queue depth (nvme0n2) 00:25:08.945 Could not set queue depth (nvme0n3) 00:25:08.945 Could not set queue depth (nvme0n4) 00:25:08.945 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:08.945 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:08.945 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:08.945 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:08.945 fio-3.35 00:25:08.945 Starting 4 threads 00:25:10.318 00:25:10.318 job0: (groupid=0, jobs=1): err= 0: pid=1854195: Thu Jul 11 02:31:00 2024 00:25:10.318 read: IOPS=1493, BW=5974KiB/s (6117kB/s)(5980KiB/1001msec) 00:25:10.318 slat (nsec): min=5935, max=39483, avg=12658.26, stdev=4713.46 00:25:10.318 clat (usec): min=199, max=42025, avg=438.07, stdev=2659.47 00:25:10.318 lat (usec): min=206, max=42040, avg=450.73, stdev=2659.93 00:25:10.318 clat percentiles (usec): 00:25:10.318 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 231], 00:25:10.318 | 30.00th=[ 239], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 258], 00:25:10.318 | 70.00th=[ 265], 80.00th=[ 273], 
90.00th=[ 289], 95.00th=[ 367], 00:25:10.318 | 99.00th=[ 498], 99.50th=[ 3884], 99.90th=[42206], 99.95th=[42206], 00:25:10.318 | 99.99th=[42206] 00:25:10.318 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:25:10.318 slat (nsec): min=7270, max=44973, avg=12578.74, stdev=5165.55 00:25:10.318 clat (usec): min=144, max=831, avg=192.28, stdev=40.09 00:25:10.318 lat (usec): min=153, max=840, avg=204.86, stdev=40.15 00:25:10.318 clat percentiles (usec): 00:25:10.318 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:25:10.318 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 188], 00:25:10.318 | 70.00th=[ 192], 80.00th=[ 204], 90.00th=[ 243], 95.00th=[ 251], 00:25:10.318 | 99.00th=[ 318], 99.50th=[ 347], 99.90th=[ 619], 99.95th=[ 832], 00:25:10.318 | 99.99th=[ 832] 00:25:10.318 bw ( KiB/s): min= 8192, max= 8192, per=44.71%, avg=8192.00, stdev= 0.00, samples=1 00:25:10.318 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:25:10.318 lat (usec) : 250=71.26%, 500=28.14%, 750=0.30%, 1000=0.03% 00:25:10.318 lat (msec) : 4=0.03%, 20=0.03%, 50=0.20% 00:25:10.318 cpu : usr=3.20%, sys=5.20%, ctx=3031, majf=0, minf=1 00:25:10.318 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:10.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.318 issued rwts: total=1495,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.318 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:10.318 job1: (groupid=0, jobs=1): err= 0: pid=1854196: Thu Jul 11 02:31:00 2024 00:25:10.318 read: IOPS=412, BW=1649KiB/s (1689kB/s)(1656KiB/1004msec) 00:25:10.318 slat (nsec): min=6320, max=36425, avg=12095.17, stdev=3604.75 00:25:10.318 clat (usec): min=202, max=42055, avg=2131.36, stdev=8628.59 00:25:10.318 lat (usec): min=213, max=42070, avg=2143.46, stdev=8630.45 00:25:10.318 clat 
percentiles (usec): 00:25:10.318 | 1.00th=[ 208], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 219], 00:25:10.318 | 30.00th=[ 223], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 233], 00:25:10.318 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[ 269], 95.00th=[ 474], 00:25:10.318 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:25:10.318 | 99.99th=[42206] 00:25:10.318 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:25:10.318 slat (nsec): min=6608, max=39394, avg=8983.30, stdev=3724.69 00:25:10.318 clat (usec): min=165, max=429, avg=211.57, stdev=37.63 00:25:10.318 lat (usec): min=174, max=438, avg=220.55, stdev=39.37 00:25:10.318 clat percentiles (usec): 00:25:10.318 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:25:10.318 | 30.00th=[ 192], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 210], 00:25:10.318 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 233], 95.00th=[ 289], 00:25:10.318 | 99.00th=[ 371], 99.50th=[ 371], 99.90th=[ 429], 99.95th=[ 429], 00:25:10.318 | 99.99th=[ 429] 00:25:10.318 bw ( KiB/s): min= 4096, max= 4096, per=22.36%, avg=4096.00, stdev= 0.00, samples=1 00:25:10.318 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:25:10.318 lat (usec) : 250=88.98%, 500=8.86% 00:25:10.318 lat (msec) : 4=0.11%, 50=2.05% 00:25:10.318 cpu : usr=0.40%, sys=1.10%, ctx=927, majf=0, minf=1 00:25:10.318 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:10.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.318 issued rwts: total=414,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.318 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:10.318 job2: (groupid=0, jobs=1): err= 0: pid=1854197: Thu Jul 11 02:31:00 2024 00:25:10.318 read: IOPS=1084, BW=4338KiB/s (4442kB/s)(4360KiB/1005msec) 00:25:10.318 slat (nsec): min=5039, max=34204, avg=8018.34, 
stdev=3798.93 00:25:10.318 clat (usec): min=199, max=42279, avg=594.76, stdev=3727.72 00:25:10.318 lat (usec): min=205, max=42290, avg=602.78, stdev=3729.76 00:25:10.318 clat percentiles (usec): 00:25:10.318 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 219], 00:25:10.318 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 237], 60.00th=[ 245], 00:25:10.318 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 293], 95.00th=[ 359], 00:25:10.318 | 99.00th=[ 2507], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:25:10.318 | 99.99th=[42206] 00:25:10.318 write: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec); 0 zone resets 00:25:10.318 slat (nsec): min=7138, max=49828, avg=11840.57, stdev=5347.60 00:25:10.318 clat (usec): min=149, max=3754, avg=210.21, stdev=104.46 00:25:10.318 lat (usec): min=157, max=3772, avg=222.05, stdev=104.96 00:25:10.318 clat percentiles (usec): 00:25:10.318 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 165], 00:25:10.318 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 188], 60.00th=[ 215], 00:25:10.318 | 70.00th=[ 239], 80.00th=[ 249], 90.00th=[ 269], 95.00th=[ 289], 00:25:10.318 | 99.00th=[ 371], 99.50th=[ 388], 99.90th=[ 644], 99.95th=[ 3752], 00:25:10.318 | 99.99th=[ 3752] 00:25:10.318 bw ( KiB/s): min= 4096, max= 8192, per=33.53%, avg=6144.00, stdev=2896.31, samples=2 00:25:10.318 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:25:10.318 lat (usec) : 250=75.40%, 500=23.69%, 750=0.42%, 1000=0.04% 00:25:10.318 lat (msec) : 4=0.11%, 50=0.34% 00:25:10.318 cpu : usr=1.79%, sys=2.29%, ctx=2629, majf=0, minf=1 00:25:10.318 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:10.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.318 issued rwts: total=1090,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.318 latency : target=0, window=0, percentile=100.00%, depth=1 
00:25:10.318 job3: (groupid=0, jobs=1): err= 0: pid=1854200: Thu Jul 11 02:31:00 2024 00:25:10.318 read: IOPS=540, BW=2163KiB/s (2215kB/s)(2176KiB/1006msec) 00:25:10.318 slat (nsec): min=5134, max=37582, avg=7426.31, stdev=5557.04 00:25:10.318 clat (usec): min=213, max=42562, avg=1412.42, stdev=6757.09 00:25:10.318 lat (usec): min=218, max=42579, avg=1419.85, stdev=6760.39 00:25:10.318 clat percentiles (usec): 00:25:10.318 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 225], 20.00th=[ 231], 00:25:10.318 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 260], 00:25:10.318 | 70.00th=[ 277], 80.00th=[ 310], 90.00th=[ 416], 95.00th=[ 478], 00:25:10.318 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:25:10.318 | 99.99th=[42730] 00:25:10.318 write: IOPS=1017, BW=4072KiB/s (4169kB/s)(4096KiB/1006msec); 0 zone resets 00:25:10.318 slat (nsec): min=6698, max=36881, avg=9584.73, stdev=3033.32 00:25:10.318 clat (usec): min=156, max=753, avg=214.05, stdev=53.72 00:25:10.319 lat (usec): min=166, max=762, avg=223.64, stdev=54.23 00:25:10.319 clat percentiles (usec): 00:25:10.319 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 182], 00:25:10.319 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 204], 00:25:10.319 | 70.00th=[ 212], 80.00th=[ 233], 90.00th=[ 277], 95.00th=[ 355], 00:25:10.319 | 99.00th=[ 400], 99.50th=[ 416], 99.90th=[ 537], 99.95th=[ 750], 00:25:10.319 | 99.99th=[ 750] 00:25:10.319 bw ( KiB/s): min= 8192, max= 8192, per=44.71%, avg=8192.00, stdev= 0.00, samples=1 00:25:10.319 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:25:10.319 lat (usec) : 250=74.49%, 500=24.11%, 750=0.32%, 1000=0.13% 00:25:10.319 lat (msec) : 50=0.96% 00:25:10.319 cpu : usr=0.60%, sys=1.79%, ctx=1568, majf=0, minf=1 00:25:10.319 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:10.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.319 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.319 issued rwts: total=544,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.319 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:10.319 00:25:10.319 Run status group 0 (all jobs): 00:25:10.319 READ: bw=13.8MiB/s (14.4MB/s), 1649KiB/s-5974KiB/s (1689kB/s-6117kB/s), io=13.8MiB (14.5MB), run=1001-1006msec 00:25:10.319 WRITE: bw=17.9MiB/s (18.8MB/s), 2040KiB/s-6138KiB/s (2089kB/s-6285kB/s), io=18.0MiB (18.9MB), run=1001-1006msec 00:25:10.319 00:25:10.319 Disk stats (read/write): 00:25:10.319 nvme0n1: ios=1074/1446, merge=0/0, ticks=548/267, in_queue=815, util=87.17% 00:25:10.319 nvme0n2: ios=458/512, merge=0/0, ticks=779/105, in_queue=884, util=91.07% 00:25:10.319 nvme0n3: ios=1144/1536, merge=0/0, ticks=1417/319, in_queue=1736, util=98.85% 00:25:10.319 nvme0n4: ios=539/1024, merge=0/0, ticks=604/216, in_queue=820, util=89.71% 00:25:10.319 02:31:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:25:10.319 [global] 00:25:10.319 thread=1 00:25:10.319 invalidate=1 00:25:10.319 rw=write 00:25:10.319 time_based=1 00:25:10.319 runtime=1 00:25:10.319 ioengine=libaio 00:25:10.319 direct=1 00:25:10.319 bs=4096 00:25:10.319 iodepth=128 00:25:10.319 norandommap=0 00:25:10.319 numjobs=1 00:25:10.319 00:25:10.319 verify_dump=1 00:25:10.319 verify_backlog=512 00:25:10.319 verify_state_save=0 00:25:10.319 do_verify=1 00:25:10.319 verify=crc32c-intel 00:25:10.319 [job0] 00:25:10.319 filename=/dev/nvme0n1 00:25:10.319 [job1] 00:25:10.319 filename=/dev/nvme0n2 00:25:10.319 [job2] 00:25:10.319 filename=/dev/nvme0n3 00:25:10.319 [job3] 00:25:10.319 filename=/dev/nvme0n4 00:25:10.319 Could not set queue depth (nvme0n1) 00:25:10.319 Could not set queue depth (nvme0n2) 00:25:10.319 Could not set queue depth (nvme0n3) 00:25:10.319 Could not set queue depth (nvme0n4) 00:25:10.319 job0: (g=0): rw=write, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:10.319 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:10.319 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:10.319 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:10.319 fio-3.35 00:25:10.319 Starting 4 threads 00:25:11.690 00:25:11.690 job0: (groupid=0, jobs=1): err= 0: pid=1854384: Thu Jul 11 02:31:01 2024 00:25:11.690 read: IOPS=5198, BW=20.3MiB/s (21.3MB/s)(20.5MiB/1008msec) 00:25:11.690 slat (usec): min=3, max=5124, avg=87.79, stdev=424.19 00:25:11.690 clat (usec): min=3233, max=16852, avg=11559.69, stdev=1279.43 00:25:11.690 lat (usec): min=7860, max=16863, avg=11647.48, stdev=1272.85 00:25:11.690 clat percentiles (usec): 00:25:11.690 | 1.00th=[ 8356], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10683], 00:25:11.690 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:25:11.690 | 70.00th=[12125], 80.00th=[12387], 90.00th=[13042], 95.00th=[13566], 00:25:11.690 | 99.00th=[16188], 99.50th=[16450], 99.90th=[16909], 99.95th=[16909], 00:25:11.690 | 99.99th=[16909] 00:25:11.690 write: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec); 0 zone resets 00:25:11.690 slat (usec): min=5, max=29319, avg=86.98, stdev=554.01 00:25:11.690 clat (usec): min=7747, max=39661, avg=11571.39, stdev=3035.31 00:25:11.690 lat (usec): min=7757, max=40302, avg=11658.38, stdev=3054.62 00:25:11.690 clat percentiles (usec): 00:25:11.690 | 1.00th=[ 8356], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[10814], 00:25:11.690 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:25:11.690 | 70.00th=[11731], 80.00th=[12125], 90.00th=[12387], 95.00th=[12911], 00:25:11.690 | 99.00th=[36439], 99.50th=[36963], 99.90th=[39584], 99.95th=[39584], 00:25:11.690 | 99.99th=[39584] 00:25:11.690 bw 
( KiB/s): min=21456, max=23536, per=35.05%, avg=22496.00, stdev=1470.78, samples=2 00:25:11.690 iops : min= 5364, max= 5884, avg=5624.00, stdev=367.70, samples=2 00:25:11.690 lat (msec) : 4=0.01%, 10=12.54%, 20=86.86%, 50=0.60% 00:25:11.690 cpu : usr=6.55%, sys=11.42%, ctx=516, majf=0, minf=13 00:25:11.690 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:25:11.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:11.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:11.690 issued rwts: total=5240,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:11.690 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:11.690 job1: (groupid=0, jobs=1): err= 0: pid=1854386: Thu Jul 11 02:31:01 2024 00:25:11.690 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:25:11.690 slat (usec): min=4, max=12460, avg=134.23, stdev=743.55 00:25:11.690 clat (usec): min=9811, max=44810, avg=16705.66, stdev=5543.49 00:25:11.690 lat (usec): min=9821, max=44819, avg=16839.89, stdev=5596.75 00:25:11.690 clat percentiles (usec): 00:25:11.690 | 1.00th=[11207], 5.00th=[12649], 10.00th=[13042], 20.00th=[13960], 00:25:11.690 | 30.00th=[14353], 40.00th=[14484], 50.00th=[15533], 60.00th=[15926], 00:25:11.690 | 70.00th=[16188], 80.00th=[16909], 90.00th=[22676], 95.00th=[27657], 00:25:11.690 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:25:11.690 | 99.99th=[44827] 00:25:11.690 write: IOPS=2978, BW=11.6MiB/s (12.2MB/s)(11.7MiB/1005msec); 0 zone resets 00:25:11.690 slat (usec): min=5, max=11663, avg=210.46, stdev=1038.71 00:25:11.690 clat (usec): min=593, max=108414, avg=28257.63, stdev=21567.00 00:25:11.690 lat (msec): min=5, max=108, avg=28.47, stdev=21.67 00:25:11.690 clat percentiles (msec): 00:25:11.690 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 13], 00:25:11.690 | 30.00th=[ 14], 40.00th=[ 19], 50.00th=[ 24], 60.00th=[ 25], 00:25:11.690 | 70.00th=[ 27], 80.00th=[ 
43], 90.00th=[ 68], 95.00th=[ 70], 00:25:11.690 | 99.00th=[ 105], 99.50th=[ 107], 99.90th=[ 109], 99.95th=[ 109], 00:25:11.690 | 99.99th=[ 109] 00:25:11.690 bw ( KiB/s): min= 9848, max=13072, per=17.85%, avg=11460.00, stdev=2279.71, samples=2 00:25:11.690 iops : min= 2462, max= 3268, avg=2865.00, stdev=569.93, samples=2 00:25:11.690 lat (usec) : 750=0.02% 00:25:11.690 lat (msec) : 10=7.65%, 20=54.19%, 50=29.01%, 100=8.43%, 250=0.70% 00:25:11.690 cpu : usr=4.08%, sys=6.08%, ctx=350, majf=0, minf=13 00:25:11.690 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:11.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:11.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:11.690 issued rwts: total=2560,2993,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:11.690 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:11.690 job2: (groupid=0, jobs=1): err= 0: pid=1854393: Thu Jul 11 02:31:01 2024 00:25:11.690 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:25:11.690 slat (usec): min=3, max=5275, avg=104.28, stdev=507.15 00:25:11.690 clat (usec): min=9453, max=18689, avg=13862.29, stdev=1384.99 00:25:11.690 lat (usec): min=9789, max=19735, avg=13966.57, stdev=1393.41 00:25:11.690 clat percentiles (usec): 00:25:11.690 | 1.00th=[10421], 5.00th=[11600], 10.00th=[12387], 20.00th=[12911], 00:25:11.690 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13698], 60.00th=[13960], 00:25:11.690 | 70.00th=[14484], 80.00th=[15008], 90.00th=[15664], 95.00th=[16319], 00:25:11.690 | 99.00th=[17695], 99.50th=[17695], 99.90th=[18220], 99.95th=[18220], 00:25:11.690 | 99.99th=[18744] 00:25:11.690 write: IOPS=4407, BW=17.2MiB/s (18.1MB/s)(17.3MiB/1004msec); 0 zone resets 00:25:11.690 slat (usec): min=5, max=44082, avg=120.06, stdev=1038.05 00:25:11.690 clat (usec): min=494, max=98343, avg=13543.83, stdev=4251.43 00:25:11.690 lat (usec): min=4259, max=98366, avg=13663.90, stdev=4437.09 
00:25:11.690 clat percentiles (usec): 00:25:11.690 | 1.00th=[ 9110], 5.00th=[ 9896], 10.00th=[10945], 20.00th=[12649], 00:25:11.690 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13435], 60.00th=[13566], 00:25:11.690 | 70.00th=[13829], 80.00th=[14484], 90.00th=[15401], 95.00th=[16188], 00:25:11.690 | 99.00th=[17171], 99.50th=[17957], 99.90th=[98042], 99.95th=[98042], 00:25:11.690 | 99.99th=[98042] 00:25:11.690 bw ( KiB/s): min=16384, max=17992, per=26.78%, avg=17188.00, stdev=1137.03, samples=2 00:25:11.690 iops : min= 4096, max= 4498, avg=4297.00, stdev=284.26, samples=2 00:25:11.690 lat (usec) : 500=0.01% 00:25:11.690 lat (msec) : 10=3.05%, 20=96.75%, 50=0.01%, 100=0.18% 00:25:11.690 cpu : usr=6.58%, sys=8.47%, ctx=382, majf=0, minf=11 00:25:11.690 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:11.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:11.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:11.690 issued rwts: total=4096,4425,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:11.690 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:11.690 job3: (groupid=0, jobs=1): err= 0: pid=1854398: Thu Jul 11 02:31:01 2024 00:25:11.690 read: IOPS=3026, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1015msec) 00:25:11.690 slat (usec): min=3, max=15590, avg=143.82, stdev=923.36 00:25:11.690 clat (usec): min=6335, max=55766, avg=16723.09, stdev=6875.81 00:25:11.690 lat (usec): min=6378, max=55785, avg=16866.91, stdev=6941.29 00:25:11.690 clat percentiles (usec): 00:25:11.690 | 1.00th=[ 6587], 5.00th=[11076], 10.00th=[12518], 20.00th=[13829], 00:25:11.690 | 30.00th=[14091], 40.00th=[14091], 50.00th=[14222], 60.00th=[14484], 00:25:11.690 | 70.00th=[15926], 80.00th=[18482], 90.00th=[25035], 95.00th=[31851], 00:25:11.690 | 99.00th=[46400], 99.50th=[53216], 99.90th=[55837], 99.95th=[55837], 00:25:11.690 | 99.99th=[55837] 00:25:11.690 write: IOPS=3190, BW=12.5MiB/s 
(13.1MB/s)(12.6MiB/1015msec); 0 zone resets 00:25:11.691 slat (usec): min=5, max=22758, avg=163.95, stdev=757.25 00:25:11.691 clat (usec): min=3263, max=57712, avg=23000.78, stdev=12066.15 00:25:11.691 lat (usec): min=3272, max=57720, avg=23164.73, stdev=12145.89 00:25:11.691 clat percentiles (usec): 00:25:11.691 | 1.00th=[ 4817], 5.00th=[ 8717], 10.00th=[10290], 20.00th=[11863], 00:25:11.691 | 30.00th=[14353], 40.00th=[15401], 50.00th=[22938], 60.00th=[24511], 00:25:11.691 | 70.00th=[25035], 80.00th=[34341], 90.00th=[41681], 95.00th=[46924], 00:25:11.691 | 99.00th=[53740], 99.50th=[55837], 99.90th=[57934], 99.95th=[57934], 00:25:11.691 | 99.99th=[57934] 00:25:11.691 bw ( KiB/s): min=12288, max=12592, per=19.38%, avg=12440.00, stdev=214.96, samples=2 00:25:11.691 iops : min= 3072, max= 3148, avg=3110.00, stdev=53.74, samples=2 00:25:11.691 lat (msec) : 4=0.19%, 10=4.96%, 20=58.84%, 50=34.07%, 100=1.93% 00:25:11.691 cpu : usr=3.84%, sys=6.90%, ctx=393, majf=0, minf=13 00:25:11.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:11.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:11.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:11.691 issued rwts: total=3072,3238,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:11.691 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:11.691 00:25:11.691 Run status group 0 (all jobs): 00:25:11.691 READ: bw=57.6MiB/s (60.4MB/s), 9.95MiB/s-20.3MiB/s (10.4MB/s-21.3MB/s), io=58.5MiB (61.3MB), run=1004-1015msec 00:25:11.691 WRITE: bw=62.7MiB/s (65.7MB/s), 11.6MiB/s-21.8MiB/s (12.2MB/s-22.9MB/s), io=63.6MiB (66.7MB), run=1004-1015msec 00:25:11.691 00:25:11.691 Disk stats (read/write): 00:25:11.691 nvme0n1: ios=4650/4663, merge=0/0, ticks=17167/15364, in_queue=32531, util=90.88% 00:25:11.691 nvme0n2: ios=2091/2063, merge=0/0, ticks=17342/34663, in_queue=52005, util=94.11% 00:25:11.691 nvme0n3: ios=3613/3652, merge=0/0, 
ticks=16177/14285, in_queue=30462, util=99.90% 00:25:11.691 nvme0n4: ios=2358/2560, merge=0/0, ticks=39332/62765, in_queue=102097, util=100.00% 00:25:11.691 02:31:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:25:11.691 [global] 00:25:11.691 thread=1 00:25:11.691 invalidate=1 00:25:11.691 rw=randwrite 00:25:11.691 time_based=1 00:25:11.691 runtime=1 00:25:11.691 ioengine=libaio 00:25:11.691 direct=1 00:25:11.691 bs=4096 00:25:11.691 iodepth=128 00:25:11.691 norandommap=0 00:25:11.691 numjobs=1 00:25:11.691 00:25:11.691 verify_dump=1 00:25:11.691 verify_backlog=512 00:25:11.691 verify_state_save=0 00:25:11.691 do_verify=1 00:25:11.691 verify=crc32c-intel 00:25:11.691 [job0] 00:25:11.691 filename=/dev/nvme0n1 00:25:11.691 [job1] 00:25:11.691 filename=/dev/nvme0n2 00:25:11.691 [job2] 00:25:11.691 filename=/dev/nvme0n3 00:25:11.691 [job3] 00:25:11.691 filename=/dev/nvme0n4 00:25:11.691 Could not set queue depth (nvme0n1) 00:25:11.691 Could not set queue depth (nvme0n2) 00:25:11.691 Could not set queue depth (nvme0n3) 00:25:11.691 Could not set queue depth (nvme0n4) 00:25:11.948 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:11.948 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:11.948 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:11.948 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:11.948 fio-3.35 00:25:11.948 Starting 4 threads 00:25:13.321 00:25:13.321 job0: (groupid=0, jobs=1): err= 0: pid=1854652: Thu Jul 11 02:31:03 2024 00:25:13.321 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:25:13.321 slat (usec): min=2, max=15085, avg=118.61, stdev=871.84 00:25:13.321 
clat (usec): min=1694, max=43155, avg=15850.85, stdev=7301.12 00:25:13.321 lat (usec): min=1710, max=46979, avg=15969.46, stdev=7381.45 00:25:13.321 clat percentiles (usec): 00:25:13.321 | 1.00th=[ 4293], 5.00th=[ 7898], 10.00th=[10028], 20.00th=[11338], 00:25:13.321 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12780], 60.00th=[14353], 00:25:13.321 | 70.00th=[15926], 80.00th=[21627], 90.00th=[27657], 95.00th=[32637], 00:25:13.321 | 99.00th=[36439], 99.50th=[39060], 99.90th=[41157], 99.95th=[41681], 00:25:13.321 | 99.99th=[43254] 00:25:13.321 write: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec); 0 zone resets 00:25:13.321 slat (usec): min=4, max=18261, avg=128.58, stdev=868.16 00:25:13.321 clat (usec): min=629, max=64574, avg=17200.79, stdev=11497.14 00:25:13.321 lat (usec): min=638, max=64598, avg=17329.37, stdev=11587.15 00:25:13.321 clat percentiles (usec): 00:25:13.321 | 1.00th=[ 2606], 5.00th=[ 6521], 10.00th=[ 8160], 20.00th=[11207], 00:25:13.321 | 30.00th=[11863], 40.00th=[12780], 50.00th=[13304], 60.00th=[14091], 00:25:13.321 | 70.00th=[17433], 80.00th=[22414], 90.00th=[28443], 95.00th=[49021], 00:25:13.321 | 99.00th=[61080], 99.50th=[64226], 99.90th=[64750], 99.95th=[64750], 00:25:13.321 | 99.99th=[64750] 00:25:13.321 bw ( KiB/s): min=14392, max=17392, per=25.31%, avg=15892.00, stdev=2121.32, samples=2 00:25:13.321 iops : min= 3598, max= 4348, avg=3973.00, stdev=530.33, samples=2 00:25:13.321 lat (usec) : 750=0.05%, 1000=0.12% 00:25:13.321 lat (msec) : 2=0.36%, 4=0.64%, 10=11.24%, 20=64.54%, 50=21.43% 00:25:13.321 lat (msec) : 100=1.61% 00:25:13.321 cpu : usr=2.97%, sys=5.25%, ctx=334, majf=0, minf=1 00:25:13.321 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:13.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:13.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:13.321 issued rwts: total=3588,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:13.321 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:25:13.321 job1: (groupid=0, jobs=1): err= 0: pid=1854653: Thu Jul 11 02:31:03 2024 00:25:13.321 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:25:13.321 slat (usec): min=2, max=24893, avg=132.64, stdev=988.13 00:25:13.321 clat (usec): min=6538, max=86976, avg=16639.65, stdev=13840.52 00:25:13.321 lat (usec): min=6547, max=86980, avg=16772.29, stdev=13908.16 00:25:13.321 clat percentiles (usec): 00:25:13.321 | 1.00th=[ 8029], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[10814], 00:25:13.321 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11994], 60.00th=[12387], 00:25:13.321 | 70.00th=[13042], 80.00th=[14746], 90.00th=[32900], 95.00th=[49021], 00:25:13.321 | 99.00th=[84411], 99.50th=[86508], 99.90th=[86508], 99.95th=[86508], 00:25:13.321 | 99.99th=[86508] 00:25:13.321 write: IOPS=3747, BW=14.6MiB/s (15.3MB/s)(14.7MiB/1004msec); 0 zone resets 00:25:13.321 slat (usec): min=4, max=23488, avg=133.75, stdev=962.19 00:25:13.321 clat (usec): min=543, max=50367, avg=17749.74, stdev=10270.45 00:25:13.321 lat (usec): min=5421, max=50379, avg=17883.49, stdev=10321.76 00:25:13.321 clat percentiles (usec): 00:25:13.321 | 1.00th=[ 5997], 5.00th=[ 8848], 10.00th=[ 9634], 20.00th=[11076], 00:25:13.321 | 30.00th=[11731], 40.00th=[11863], 50.00th=[12256], 60.00th=[12780], 00:25:13.321 | 70.00th=[17957], 80.00th=[30540], 90.00th=[35914], 95.00th=[38536], 00:25:13.321 | 99.00th=[44303], 99.50th=[44827], 99.90th=[46924], 99.95th=[50594], 00:25:13.321 | 99.99th=[50594] 00:25:13.321 bw ( KiB/s): min=14056, max=15046, per=23.17%, avg=14551.00, stdev=700.04, samples=2 00:25:13.321 iops : min= 3514, max= 3761, avg=3637.50, stdev=174.66, samples=2 00:25:13.321 lat (usec) : 750=0.01% 00:25:13.321 lat (msec) : 10=11.31%, 20=67.40%, 50=18.95%, 100=2.33% 00:25:13.321 cpu : usr=2.79%, sys=4.89%, ctx=322, majf=0, minf=1 00:25:13.321 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:13.321 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:13.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:13.321 issued rwts: total=3584,3762,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:13.321 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:13.321 job2: (groupid=0, jobs=1): err= 0: pid=1854654: Thu Jul 11 02:31:03 2024 00:25:13.321 read: IOPS=3906, BW=15.3MiB/s (16.0MB/s)(15.3MiB/1005msec) 00:25:13.321 slat (usec): min=4, max=17344, avg=122.93, stdev=797.04 00:25:13.321 clat (usec): min=1794, max=58507, avg=16169.20, stdev=8111.12 00:25:13.321 lat (usec): min=4082, max=58522, avg=16292.13, stdev=8195.13 00:25:13.321 clat percentiles (usec): 00:25:13.321 | 1.00th=[ 4948], 5.00th=[11207], 10.00th=[12125], 20.00th=[12780], 00:25:13.321 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13435], 60.00th=[13566], 00:25:13.321 | 70.00th=[14091], 80.00th=[14877], 90.00th=[31065], 95.00th=[35390], 00:25:13.321 | 99.00th=[48497], 99.50th=[48497], 99.90th=[51119], 99.95th=[56886], 00:25:13.321 | 99.99th=[58459] 00:25:13.321 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:25:13.321 slat (usec): min=5, max=16198, avg=116.86, stdev=694.21 00:25:13.321 clat (usec): min=9843, max=38712, avg=15317.44, stdev=5446.79 00:25:13.321 lat (usec): min=9861, max=38761, avg=15434.30, stdev=5502.16 00:25:13.321 clat percentiles (usec): 00:25:13.321 | 1.00th=[10028], 5.00th=[11207], 10.00th=[11600], 20.00th=[12256], 00:25:13.321 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13435], 60.00th=[13829], 00:25:13.321 | 70.00th=[14091], 80.00th=[15008], 90.00th=[24773], 95.00th=[28705], 00:25:13.321 | 99.00th=[35914], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487], 00:25:13.321 | 99.99th=[38536] 00:25:13.321 bw ( KiB/s): min=12312, max=20480, per=26.11%, avg=16396.00, stdev=5775.65, samples=2 00:25:13.321 iops : min= 3078, max= 5120, avg=4099.00, stdev=1443.91, samples=2 00:25:13.321 lat (msec) : 2=0.01%, 10=1.35%, 
20=84.78%, 50=13.81%, 100=0.05% 00:25:13.321 cpu : usr=5.88%, sys=7.47%, ctx=322, majf=0, minf=1 00:25:13.321 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:13.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:13.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:13.321 issued rwts: total=3926,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:13.321 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:13.321 job3: (groupid=0, jobs=1): err= 0: pid=1854655: Thu Jul 11 02:31:03 2024 00:25:13.321 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:25:13.321 slat (usec): min=3, max=23138, avg=143.99, stdev=922.97 00:25:13.321 clat (usec): min=9403, max=41313, avg=18358.36, stdev=5171.53 00:25:13.321 lat (usec): min=9414, max=41339, avg=18502.35, stdev=5241.24 00:25:13.321 clat percentiles (usec): 00:25:13.321 | 1.00th=[10159], 5.00th=[13042], 10.00th=[13829], 20.00th=[14222], 00:25:13.321 | 30.00th=[15008], 40.00th=[15664], 50.00th=[17171], 60.00th=[18482], 00:25:13.321 | 70.00th=[20579], 80.00th=[21627], 90.00th=[25035], 95.00th=[30540], 00:25:13.321 | 99.00th=[35390], 99.50th=[35390], 99.90th=[36439], 99.95th=[40109], 00:25:13.321 | 99.99th=[41157] 00:25:13.321 write: IOPS=3876, BW=15.1MiB/s (15.9MB/s)(15.3MiB/1011msec); 0 zone resets 00:25:13.321 slat (usec): min=5, max=10938, avg=114.06, stdev=785.91 00:25:13.321 clat (usec): min=6583, max=33834, avg=15913.08, stdev=4317.76 00:25:13.321 lat (usec): min=6591, max=33858, avg=16027.14, stdev=4392.75 00:25:13.321 clat percentiles (usec): 00:25:13.321 | 1.00th=[ 7046], 5.00th=[10945], 10.00th=[11469], 20.00th=[12780], 00:25:13.321 | 30.00th=[13698], 40.00th=[14091], 50.00th=[15008], 60.00th=[15795], 00:25:13.321 | 70.00th=[17695], 80.00th=[19006], 90.00th=[21365], 95.00th=[24773], 00:25:13.321 | 99.00th=[29230], 99.50th=[32113], 99.90th=[33817], 99.95th=[33817], 00:25:13.321 | 99.99th=[33817] 00:25:13.321 bw 
( KiB/s): min=13944, max=16416, per=24.17%, avg=15180.00, stdev=1747.97, samples=2 00:25:13.321 iops : min= 3486, max= 4104, avg=3795.00, stdev=436.99, samples=2 00:25:13.321 lat (msec) : 10=2.32%, 20=74.12%, 50=23.56% 00:25:13.321 cpu : usr=5.45%, sys=6.73%, ctx=208, majf=0, minf=1 00:25:13.321 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:13.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:13.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:13.321 issued rwts: total=3584,3919,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:13.321 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:13.321 00:25:13.321 Run status group 0 (all jobs): 00:25:13.321 READ: bw=56.7MiB/s (59.5MB/s), 13.8MiB/s-15.3MiB/s (14.5MB/s-16.0MB/s), io=57.4MiB (60.1MB), run=1004-1011msec 00:25:13.321 WRITE: bw=61.3MiB/s (64.3MB/s), 14.6MiB/s-15.9MiB/s (15.3MB/s-16.7MB/s), io=62.0MiB (65.0MB), run=1004-1011msec 00:25:13.321 00:25:13.321 Disk stats (read/write): 00:25:13.321 nvme0n1: ios=3547/3584, merge=0/0, ticks=36522/37513, in_queue=74035, util=100.00% 00:25:13.321 nvme0n2: ios=2590/3072, merge=0/0, ticks=15849/16058, in_queue=31907, util=97.56% 00:25:13.321 nvme0n3: ios=3130/3487, merge=0/0, ticks=17280/17175, in_queue=34455, util=98.23% 00:25:13.321 nvme0n4: ios=3124/3306, merge=0/0, ticks=31465/26520, in_queue=57985, util=99.79% 00:25:13.321 02:31:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:25:13.321 02:31:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1854759 00:25:13.322 02:31:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:25:13.322 02:31:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:25:13.322 [global] 00:25:13.322 thread=1 00:25:13.322 invalidate=1 00:25:13.322 rw=read 00:25:13.322 time_based=1 00:25:13.322 runtime=10 
00:25:13.322 ioengine=libaio 00:25:13.322 direct=1 00:25:13.322 bs=4096 00:25:13.322 iodepth=1 00:25:13.322 norandommap=1 00:25:13.322 numjobs=1 00:25:13.322 00:25:13.322 [job0] 00:25:13.322 filename=/dev/nvme0n1 00:25:13.322 [job1] 00:25:13.322 filename=/dev/nvme0n2 00:25:13.322 [job2] 00:25:13.322 filename=/dev/nvme0n3 00:25:13.322 [job3] 00:25:13.322 filename=/dev/nvme0n4 00:25:13.322 Could not set queue depth (nvme0n1) 00:25:13.322 Could not set queue depth (nvme0n2) 00:25:13.322 Could not set queue depth (nvme0n3) 00:25:13.322 Could not set queue depth (nvme0n4) 00:25:13.322 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:13.322 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:13.322 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:13.322 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:13.322 fio-3.35 00:25:13.322 Starting 4 threads 00:25:16.600 02:31:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:25:16.600 02:31:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:25:16.600 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=40906752, buflen=4096 00:25:16.600 fio: pid=1854834, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:25:16.600 02:31:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:16.600 02:31:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:25:16.600 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read 
offset=630784, buflen=4096 00:25:16.600 fio: pid=1854833, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:25:17.165 02:31:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:17.165 02:31:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:25:17.165 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=1888256, buflen=4096 00:25:17.165 fio: pid=1854831, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:25:17.165 02:31:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:17.165 02:31:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:25:17.422 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=44089344, buflen=4096 00:25:17.422 fio: pid=1854832, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:25:17.422 00:25:17.422 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1854831: Thu Jul 11 02:31:07 2024 00:25:17.422 read: IOPS=130, BW=519KiB/s (531kB/s)(1844KiB/3553msec) 00:25:17.422 slat (usec): min=5, max=12882, avg=41.09, stdev=598.78 00:25:17.422 clat (usec): min=207, max=42016, avg=7612.79, stdev=15696.31 00:25:17.422 lat (usec): min=215, max=53998, avg=7653.91, stdev=15771.33 00:25:17.422 clat percentiles (usec): 00:25:17.422 | 1.00th=[ 215], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 237], 00:25:17.422 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 260], 00:25:17.422 | 70.00th=[ 281], 80.00th=[ 482], 90.00th=[41157], 95.00th=[41157], 00:25:17.422 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:25:17.422 | 99.99th=[42206] 00:25:17.422 bw ( KiB/s): 
min= 96, max= 3088, per=2.71%, avg=597.33, stdev=1220.18, samples=6 00:25:17.422 iops : min= 24, max= 772, avg=149.33, stdev=305.04, samples=6 00:25:17.422 lat (usec) : 250=49.35%, 500=30.52%, 750=1.73%, 1000=0.22% 00:25:17.422 lat (msec) : 50=17.97% 00:25:17.422 cpu : usr=0.00%, sys=0.28%, ctx=464, majf=0, minf=1 00:25:17.422 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:17.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:17.422 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:17.422 issued rwts: total=462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:17.422 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:17.422 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1854832: Thu Jul 11 02:31:07 2024 00:25:17.422 read: IOPS=2770, BW=10.8MiB/s (11.3MB/s)(42.0MiB/3885msec) 00:25:17.422 slat (usec): min=5, max=14714, avg=12.77, stdev=164.48 00:25:17.422 clat (usec): min=189, max=41984, avg=343.14, stdev=1759.49 00:25:17.422 lat (usec): min=197, max=42059, avg=355.91, stdev=1769.56 00:25:17.422 clat percentiles (usec): 00:25:17.422 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 221], 20.00th=[ 233], 00:25:17.422 | 30.00th=[ 241], 40.00th=[ 251], 50.00th=[ 273], 60.00th=[ 281], 00:25:17.422 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 314], 00:25:17.422 | 99.00th=[ 474], 99.50th=[ 502], 99.90th=[41157], 99.95th=[41157], 00:25:17.422 | 99.99th=[42206] 00:25:17.422 bw ( KiB/s): min= 399, max=14912, per=55.83%, avg=12282.14, stdev=5279.31, samples=7 00:25:17.422 iops : min= 99, max= 3728, avg=3070.43, stdev=1320.11, samples=7 00:25:17.422 lat (usec) : 250=39.19%, 500=60.26%, 750=0.33%, 1000=0.01% 00:25:17.422 lat (msec) : 2=0.01%, 50=0.19% 00:25:17.422 cpu : usr=1.36%, sys=4.07%, ctx=10771, majf=0, minf=1 00:25:17.422 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:17.422 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:17.422 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:17.422 issued rwts: total=10765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:17.422 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:17.423 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1854833: Thu Jul 11 02:31:07 2024 00:25:17.423 read: IOPS=47, BW=188KiB/s (192kB/s)(616KiB/3278msec) 00:25:17.423 slat (usec): min=6, max=15700, avg=161.60, stdev=1372.59 00:25:17.423 clat (usec): min=227, max=44017, avg=20973.54, stdev=20509.99 00:25:17.423 lat (usec): min=235, max=56815, avg=21136.06, stdev=20617.94 00:25:17.423 clat percentiles (usec): 00:25:17.423 | 1.00th=[ 231], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 253], 00:25:17.423 | 30.00th=[ 262], 40.00th=[ 273], 50.00th=[41157], 60.00th=[41157], 00:25:17.423 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:25:17.423 | 99.00th=[42206], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:25:17.423 | 99.99th=[43779] 00:25:17.423 bw ( KiB/s): min= 96, max= 688, per=0.89%, avg=196.00, stdev=241.05, samples=6 00:25:17.423 iops : min= 24, max= 172, avg=49.00, stdev=60.26, samples=6 00:25:17.423 lat (usec) : 250=14.84%, 500=33.55%, 750=0.65% 00:25:17.423 lat (msec) : 50=50.32% 00:25:17.423 cpu : usr=0.12%, sys=0.00%, ctx=158, majf=0, minf=1 00:25:17.423 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:17.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:17.423 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:17.423 issued rwts: total=155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:17.423 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:17.423 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1854834: Thu Jul 11 02:31:07 2024 
00:25:17.423 read: IOPS=3378, BW=13.2MiB/s (13.8MB/s)(39.0MiB/2956msec) 00:25:17.423 slat (nsec): min=5979, max=59158, avg=11186.18, stdev=4685.66 00:25:17.423 clat (usec): min=203, max=906, avg=279.10, stdev=52.28 00:25:17.423 lat (usec): min=210, max=913, avg=290.29, stdev=52.77 00:25:17.423 clat percentiles (usec): 00:25:17.423 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 227], 20.00th=[ 245], 00:25:17.423 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 281], 00:25:17.423 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 330], 00:25:17.423 | 99.00th=[ 510], 99.50th=[ 519], 99.90th=[ 676], 99.95th=[ 717], 00:25:17.423 | 99.99th=[ 906] 00:25:17.423 bw ( KiB/s): min=12224, max=14224, per=61.68%, avg=13569.60, stdev=776.30, samples=5 00:25:17.423 iops : min= 3056, max= 3556, avg=3392.40, stdev=194.07, samples=5 00:25:17.423 lat (usec) : 250=22.23%, 500=75.64%, 750=2.09%, 1000=0.03% 00:25:17.423 cpu : usr=2.57%, sys=6.16%, ctx=9989, majf=0, minf=1 00:25:17.423 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:17.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:17.423 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:17.423 issued rwts: total=9988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:17.423 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:17.423 00:25:17.423 Run status group 0 (all jobs): 00:25:17.423 READ: bw=21.5MiB/s (22.5MB/s), 188KiB/s-13.2MiB/s (192kB/s-13.8MB/s), io=83.5MiB (87.5MB), run=2956-3885msec 00:25:17.423 00:25:17.423 Disk stats (read/write): 00:25:17.423 nvme0n1: ios=456/0, merge=0/0, ticks=3306/0, in_queue=3306, util=95.65% 00:25:17.423 nvme0n2: ios=10803/0, merge=0/0, ticks=3825/0, in_queue=3825, util=98.64% 00:25:17.423 nvme0n3: ios=199/0, merge=0/0, ticks=3219/0, in_queue=3219, util=98.32% 00:25:17.423 nvme0n4: ios=9745/0, merge=0/0, ticks=2630/0, in_queue=2630, util=96.75% 00:25:17.681 02:31:07 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:17.681 02:31:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:25:17.939 02:31:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:17.939 02:31:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:25:18.197 02:31:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:18.197 02:31:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:25:18.454 02:31:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:18.454 02:31:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:25:18.712 02:31:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:25:18.712 02:31:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1854759 00:25:18.712 02:31:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:25:18.712 02:31:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:18.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:18.970 02:31:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:18.970 02:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:25:18.970 02:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:18.970 
02:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:18.970 02:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:18.970 02:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:18.970 02:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:25:18.970 02:31:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:25:18.970 02:31:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:25:18.970 nvmf hotplug test: fio failed as expected 00:25:18.970 02:31:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:19.227 02:31:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:25:19.227 02:31:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:25:19.227 02:31:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:25:19.227 02:31:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:25:19.227 02:31:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:25:19.227 02:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:19.227 02:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:25:19.227 02:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:19.227 02:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:25:19.227 02:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:19.227 02:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:19.227 rmmod nvme_tcp 00:25:19.227 rmmod nvme_fabrics 00:25:19.227 rmmod nvme_keyring 
00:25:19.227 02:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:19.227 02:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:25:19.227 02:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:25:19.227 02:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1853185 ']' 00:25:19.227 02:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1853185 00:25:19.227 02:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1853185 ']' 00:25:19.227 02:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1853185 00:25:19.227 02:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:25:19.227 02:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:19.227 02:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1853185 00:25:19.227 02:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:19.227 02:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:19.227 02:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1853185' 00:25:19.227 killing process with pid 1853185 00:25:19.227 02:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1853185 00:25:19.227 02:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1853185 00:25:19.485 02:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:19.485 02:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:19.485 02:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:19.485 02:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:19.485 02:31:09 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:19.485 02:31:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.485 02:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:19.485 02:31:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.440 02:31:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:21.440 00:25:21.440 real 0m23.203s 00:25:21.440 user 1m22.511s 00:25:21.440 sys 0m6.440s 00:25:21.440 02:31:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:21.440 02:31:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:25:21.440 ************************************ 00:25:21.440 END TEST nvmf_fio_target 00:25:21.440 ************************************ 00:25:21.440 02:31:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:21.440 02:31:11 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:25:21.440 02:31:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:21.440 02:31:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:21.440 02:31:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:21.440 ************************************ 00:25:21.440 START TEST nvmf_bdevio 00:25:21.440 ************************************ 00:25:21.440 02:31:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:25:21.698 * Looking for test storage... 
00:25:21.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:25:21.698 02:31:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 
00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:25:23.605 Found 0000:08:00.0 (0x8086 - 0x159b) 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:25:23.605 Found 0000:08:00.1 (0x8086 - 0x159b) 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:25:23.605 Found net devices under 0000:08:00.0: cvl_0_0 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:08:00.1: cvl_0_1' 00:25:23.605 Found net devices under 0000:08:00.1: cvl_0_1 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:23.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:23.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:25:23.605 00:25:23.605 --- 10.0.0.2 ping statistics --- 00:25:23.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.605 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:23.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:23.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:25:23.605 00:25:23.605 --- 10.0.0.1 ping statistics --- 00:25:23.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.605 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:23.605 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1856867 00:25:23.606 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:25:23.606 02:31:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1856867 00:25:23.606 02:31:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1856867 ']' 00:25:23.606 02:31:13 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.606 02:31:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:23.606 02:31:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:23.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.606 02:31:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:23.606 02:31:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:23.606 [2024-07-11 02:31:13.732465] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:25:23.606 [2024-07-11 02:31:13.732571] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:23.606 EAL: No free 2048 kB hugepages reported on node 1 00:25:23.606 [2024-07-11 02:31:13.796926] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:23.606 [2024-07-11 02:31:13.885103] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:23.606 [2024-07-11 02:31:13.885163] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:23.606 [2024-07-11 02:31:13.885180] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:23.606 [2024-07-11 02:31:13.885194] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:23.606 [2024-07-11 02:31:13.885206] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:23.606 [2024-07-11 02:31:13.885306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:23.606 [2024-07-11 02:31:13.885387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:25:23.606 [2024-07-11 02:31:13.885905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:25:23.606 [2024-07-11 02:31:13.885915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:23.606 02:31:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:23.606 02:31:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:25:23.606 02:31:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:23.606 02:31:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:23.606 02:31:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:23.864 02:31:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:23.864 02:31:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:23.864 02:31:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.864 02:31:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:23.864 [2024-07-11 02:31:14.033243] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:23.864 02:31:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.864 02:31:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:23.864 02:31:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.864 02:31:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:23.864 Malloc0 00:25:23.864 02:31:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.864 02:31:14 nvmf_tcp.nvmf_bdevio 
-- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:23.864 02:31:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.864 02:31:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:23.864 02:31:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.864 02:31:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:23.865 02:31:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.865 02:31:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:23.865 02:31:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.865 02:31:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:23.865 02:31:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.865 02:31:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:23.865 [2024-07-11 02:31:14.082646] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:23.865 02:31:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.865 02:31:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:25:23.865 02:31:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:25:23.865 02:31:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:25:23.865 02:31:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:25:23.865 02:31:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:23.865 02:31:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:25:23.865 { 00:25:23.865 "params": { 00:25:23.865 "name": "Nvme$subsystem", 00:25:23.865 "trtype": "$TEST_TRANSPORT", 00:25:23.865 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.865 "adrfam": "ipv4", 00:25:23.865 "trsvcid": "$NVMF_PORT", 00:25:23.865 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.865 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.865 "hdgst": ${hdgst:-false}, 00:25:23.865 "ddgst": ${ddgst:-false} 00:25:23.865 }, 00:25:23.865 "method": "bdev_nvme_attach_controller" 00:25:23.865 } 00:25:23.865 EOF 00:25:23.865 )") 00:25:23.865 02:31:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:25:23.865 02:31:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:25:23.865 02:31:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:25:23.865 02:31:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:23.865 "params": { 00:25:23.865 "name": "Nvme1", 00:25:23.865 "trtype": "tcp", 00:25:23.865 "traddr": "10.0.0.2", 00:25:23.865 "adrfam": "ipv4", 00:25:23.865 "trsvcid": "4420", 00:25:23.865 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.865 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:23.865 "hdgst": false, 00:25:23.865 "ddgst": false 00:25:23.865 }, 00:25:23.865 "method": "bdev_nvme_attach_controller" 00:25:23.865 }' 00:25:23.865 [2024-07-11 02:31:14.130761] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:25:23.865 [2024-07-11 02:31:14.130853] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856986 ] 00:25:23.865 EAL: No free 2048 kB hugepages reported on node 1 00:25:23.865 [2024-07-11 02:31:14.191479] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:23.865 [2024-07-11 02:31:14.280644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.865 [2024-07-11 02:31:14.280696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:23.865 [2024-07-11 02:31:14.280699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.431 I/O targets: 00:25:24.431 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:25:24.431 00:25:24.431 00:25:24.431 CUnit - A unit testing framework for C - Version 2.1-3 00:25:24.431 http://cunit.sourceforge.net/ 00:25:24.431 00:25:24.431 00:25:24.431 Suite: bdevio tests on: Nvme1n1 00:25:24.431 Test: blockdev write read block ...passed 00:25:24.431 Test: blockdev write zeroes read block ...passed 00:25:24.431 Test: blockdev write zeroes read no split ...passed 00:25:24.431 Test: blockdev write zeroes read split ...passed 00:25:24.431 Test: blockdev write zeroes read split partial ...passed 00:25:24.431 Test: blockdev reset ...[2024-07-11 02:31:14.803812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:24.431 [2024-07-11 02:31:14.803946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148e6e0 (9): Bad file descriptor 00:25:24.431 [2024-07-11 02:31:14.814828] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:24.431 passed 00:25:24.689 Test: blockdev write read 8 blocks ...passed 00:25:24.689 Test: blockdev write read size > 128k ...passed 00:25:24.689 Test: blockdev write read invalid size ...passed 00:25:24.689 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:24.689 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:24.689 Test: blockdev write read max offset ...passed 00:25:24.689 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:24.689 Test: blockdev writev readv 8 blocks ...passed 00:25:24.689 Test: blockdev writev readv 30 x 1block ...passed 00:25:24.689 Test: blockdev writev readv block ...passed 00:25:24.689 Test: blockdev writev readv size > 128k ...passed 00:25:24.689 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:24.689 Test: blockdev comparev and writev ...[2024-07-11 02:31:15.066698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:24.689 [2024-07-11 02:31:15.066740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.689 [2024-07-11 02:31:15.066768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:24.689 [2024-07-11 02:31:15.066786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.689 [2024-07-11 02:31:15.067123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:24.689 [2024-07-11 02:31:15.067149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:24.689 [2024-07-11 02:31:15.067173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:24.689 [2024-07-11 02:31:15.067191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:24.689 [2024-07-11 02:31:15.067524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:24.689 [2024-07-11 02:31:15.067550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:24.689 [2024-07-11 02:31:15.067575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:24.689 [2024-07-11 02:31:15.067592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:24.689 [2024-07-11 02:31:15.067921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:24.689 [2024-07-11 02:31:15.067952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:24.689 [2024-07-11 02:31:15.067978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:24.689 [2024-07-11 02:31:15.067995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:24.689 passed 00:25:24.948 Test: blockdev nvme passthru rw ...passed 00:25:24.948 Test: blockdev nvme passthru vendor specific ...[2024-07-11 02:31:15.149814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:24.948 [2024-07-11 02:31:15.149847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:24.948 [2024-07-11 02:31:15.150002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:24.948 [2024-07-11 02:31:15.150026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:24.948 [2024-07-11 02:31:15.150178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:24.948 [2024-07-11 02:31:15.150201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:24.948 [2024-07-11 02:31:15.150357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:24.948 [2024-07-11 02:31:15.150381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:24.948 passed 00:25:24.948 Test: blockdev nvme admin passthru ...passed 00:25:24.948 Test: blockdev copy ...passed 00:25:24.948 00:25:24.948 Run Summary: Type Total Ran Passed Failed Inactive 00:25:24.948 suites 1 1 n/a 0 0 00:25:24.948 tests 23 23 23 0 0 00:25:24.948 asserts 152 152 152 0 n/a 00:25:24.948 00:25:24.948 Elapsed time = 1.202 seconds 00:25:24.948 02:31:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:24.948 02:31:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.948 02:31:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:24.948 02:31:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.948 02:31:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:25:24.948 02:31:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 
00:25:24.948 02:31:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:24.948 02:31:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:25:24.948 02:31:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:24.948 02:31:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:25:24.948 02:31:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:24.948 02:31:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:25.207 rmmod nvme_tcp 00:25:25.207 rmmod nvme_fabrics 00:25:25.207 rmmod nvme_keyring 00:25:25.207 02:31:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:25.207 02:31:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:25:25.207 02:31:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:25:25.207 02:31:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1856867 ']' 00:25:25.207 02:31:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1856867 00:25:25.207 02:31:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 1856867 ']' 00:25:25.207 02:31:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1856867 00:25:25.207 02:31:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:25:25.207 02:31:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:25.207 02:31:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1856867 00:25:25.207 02:31:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:25:25.207 02:31:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:25:25.207 02:31:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1856867' 00:25:25.207 killing process with pid 1856867 00:25:25.207 02:31:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 
1856867 00:25:25.207 02:31:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1856867 00:25:25.467 02:31:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:25.467 02:31:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:25.467 02:31:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:25.467 02:31:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:25.467 02:31:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:25.467 02:31:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.467 02:31:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:25.467 02:31:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.377 02:31:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:27.377 00:25:27.377 real 0m5.840s 00:25:27.377 user 0m10.097s 00:25:27.377 sys 0m1.731s 00:25:27.377 02:31:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:27.377 02:31:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:27.377 ************************************ 00:25:27.377 END TEST nvmf_bdevio 00:25:27.377 ************************************ 00:25:27.377 02:31:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:27.377 02:31:17 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:25:27.377 02:31:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:27.377 02:31:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:27.377 02:31:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:27.377 ************************************ 00:25:27.377 START TEST nvmf_auth_target 00:25:27.377 
************************************ 00:25:27.377 02:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:25:27.377 * Looking for test storage... 00:25:27.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:27.377 02:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:27.377 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:25:27.377 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:27.377 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:27.637 02:31:17 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.637 02:31:17 
nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:27.637 02:31:17 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:25:27.637 02:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:29.018 02:31:19 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:25:29.018 Found 0000:08:00.0 (0x8086 - 0x159b) 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:29.018 02:31:19 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:25:29.018 Found 0000:08:00.1 (0x8086 - 0x159b) 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:25:29.018 Found net devices under 0000:08:00.0: cvl_0_0 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:25:29.018 Found net devices under 0000:08:00.1: cvl_0_1 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:29.018 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:29.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:29.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:25:29.277 00:25:29.277 --- 10.0.0.2 ping statistics --- 00:25:29.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.277 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:29.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:29.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:25:29.277 00:25:29.277 --- 10.0.0.1 ping statistics --- 00:25:29.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.277 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- 
# xtrace_disable 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1858507 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1858507 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1858507 ']' 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:29.277 02:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1858598 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@727 -- # key=da77581bc965a9e4a2c919266461fa56eb406586abc8847d 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.27k 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key da77581bc965a9e4a2c919266461fa56eb406586abc8847d 0 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 da77581bc965a9e4a2c919266461fa56eb406586abc8847d 0 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=da77581bc965a9e4a2c919266461fa56eb406586abc8847d 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.27k 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.27k 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.27k 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:25:29.536 02:31:19 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1cb836d4f29857970e5d65c414fc5104078e9951063a7b430a33e654cd4fc130 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.jKJ 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1cb836d4f29857970e5d65c414fc5104078e9951063a7b430a33e654cd4fc130 3 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1cb836d4f29857970e5d65c414fc5104078e9951063a7b430a33e654cd4fc130 3 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1cb836d4f29857970e5d65c414fc5104078e9951063a7b430a33e654cd4fc130 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:25:29.536 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:25:29.794 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.jKJ 00:25:29.794 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.jKJ 00:25:29.794 02:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.jKJ 00:25:29.794 02:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:25:29.794 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:25:29.795 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:29.795 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # 
local -A digests 00:25:29.795 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:25:29.795 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:25:29.795 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:29.795 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=91259e73fc7a64c0cfcf03eb8c31a506 00:25:29.795 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:29.795 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.bO1 00:25:29.795 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 91259e73fc7a64c0cfcf03eb8c31a506 1 00:25:29.795 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 91259e73fc7a64c0cfcf03eb8c31a506 1 00:25:29.795 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:25:29.795 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:29.795 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=91259e73fc7a64c0cfcf03eb8c31a506 00:25:29.795 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:25:29.795 02:31:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.bO1 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.bO1 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.bO1 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c7fbdb7a83cb06dd67db455d8c4f08a04baf081e82b6fb9f 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.XSE 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c7fbdb7a83cb06dd67db455d8c4f08a04baf081e82b6fb9f 2 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c7fbdb7a83cb06dd67db455d8c4f08a04baf081e82b6fb9f 2 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c7fbdb7a83cb06dd67db455d8c4f08a04baf081e82b6fb9f 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.XSE 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.XSE 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.XSE 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0a198963383b50dba90c01ede2beddad8580c98b6b512223 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Joi 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0a198963383b50dba90c01ede2beddad8580c98b6b512223 2 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0a198963383b50dba90c01ede2beddad8580c98b6b512223 2 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0a198963383b50dba90c01ede2beddad8580c98b6b512223 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Joi 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Joi 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Joi 00:25:29.795 02:31:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2c77d25bb2f3203121b19402b30940ad 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Qdr 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2c77d25bb2f3203121b19402b30940ad 1 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2c77d25bb2f3203121b19402b30940ad 1 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2c77d25bb2f3203121b19402b30940ad 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Qdr 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Qdr 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Qdr 00:25:29.795 02:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=adec28851ea4e0e5042e905ba42a5bda8890654fe8bf2200f81c7ec2dac99279 00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.50N 00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key adec28851ea4e0e5042e905ba42a5bda8890654fe8bf2200f81c7ec2dac99279 3 00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 adec28851ea4e0e5042e905ba42a5bda8890654fe8bf2200f81c7ec2dac99279 3 00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=adec28851ea4e0e5042e905ba42a5bda8890654fe8bf2200f81c7ec2dac99279 00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.50N 00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.50N 00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.50N 00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1858507 00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1858507 ']' 00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
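The `gen_dhchap_key` trace above can be condensed into a short sketch: draw `len/2` random bytes, hex-encode them with `xxd`, wrap the hex string in a `DHHC-1:<digest>:<base64 payload>:` envelope, and store it in a mode-0600 temp file. The CRC-32 trailer inside the base64 payload is an assumption inferred from the `format_key` python step; SPDK's exact packing may differ.

```shell
#!/usr/bin/env bash
# Hedged sketch of gen_dhchap_key as traced above (sha256, 32 hex chars).
digest_id=1          # 1 = sha256 in the digests map from the trace
len=32               # hex characters, i.e. len/2 random bytes
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
file=$(mktemp -t spdk.key-sha256.XXX)
# Payload: ASCII hex key plus a 4-byte CRC-32 trailer (assumption).
python3 - "$key" "$digest_id" > "$file" <<'EOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
payload = base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode()
print(f"DHHC-1:{digest:02}:{payload}:")
EOF
chmod 0600 "$file"   # keys are secrets; restrict to owner, as the trace does
cat "$file"
```

Each run yields a fresh random key, which is why every key file in the log (`/tmp/spdk.key-sha256.Qdr`, `/tmp/spdk.key-sha512.50N`, ...) carries a different suffix and payload.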
00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:30.053 02:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:30.311 02:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:30.311 02:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:25:30.311 02:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1858598 /var/tmp/host.sock 00:25:30.311 02:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1858598 ']' 00:25:30.311 02:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:25:30.311 02:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:30.311 02:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:25:30.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:25:30.311 02:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:30.311 02:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:30.569 02:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:30.569 02:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:25:30.569 02:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:25:30.569 02:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.569 02:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:30.569 02:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.569 02:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:25:30.569 02:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.27k 00:25:30.569 02:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.569 02:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:30.569 02:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.569 02:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.27k 00:25:30.569 02:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.27k 00:25:30.827 02:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.jKJ ]] 00:25:30.827 02:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jKJ 00:25:30.827 02:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.827 02:31:21 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:30.827 02:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.827 02:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jKJ 00:25:30.827 02:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jKJ 00:25:31.394 02:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:25:31.394 02:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.bO1 00:25:31.394 02:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.394 02:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:31.394 02:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.394 02:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.bO1 00:25:31.394 02:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.bO1 00:25:31.394 02:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.XSE ]] 00:25:31.394 02:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XSE 00:25:31.394 02:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.394 02:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:31.653 02:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.653 02:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc 
keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XSE 00:25:31.653 02:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XSE 00:25:31.911 02:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:25:31.911 02:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Joi 00:25:31.911 02:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.911 02:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:31.911 02:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.911 02:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Joi 00:25:31.911 02:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Joi 00:25:32.169 02:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Qdr ]] 00:25:32.169 02:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Qdr 00:25:32.169 02:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.169 02:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:32.169 02:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.169 02:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Qdr 00:25:32.169 02:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.Qdr 00:25:32.428 02:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:25:32.428 02:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.50N 00:25:32.428 02:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.428 02:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:32.428 02:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.428 02:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.50N 00:25:32.428 02:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.50N 00:25:32.686 02:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:25:32.686 02:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:25:32.686 02:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:25:32.686 02:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:32.686 02:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:25:32.686 02:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:25:32.944 02:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:25:32.944 02:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:32.945 02:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:32.945 02:31:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:25:32.945 02:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:32.945 02:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:32.945 02:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.945 02:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.945 02:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:32.945 02:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.945 02:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.945 02:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:33.203 00:25:33.203 02:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:33.203 02:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:33.203 02:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:33.461 02:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.461 
02:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:33.461 02:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.461 02:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:33.461 02:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.461 02:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:33.461 { 00:25:33.461 "cntlid": 1, 00:25:33.461 "qid": 0, 00:25:33.461 "state": "enabled", 00:25:33.461 "thread": "nvmf_tgt_poll_group_000", 00:25:33.461 "listen_address": { 00:25:33.461 "trtype": "TCP", 00:25:33.461 "adrfam": "IPv4", 00:25:33.461 "traddr": "10.0.0.2", 00:25:33.461 "trsvcid": "4420" 00:25:33.461 }, 00:25:33.461 "peer_address": { 00:25:33.461 "trtype": "TCP", 00:25:33.461 "adrfam": "IPv4", 00:25:33.461 "traddr": "10.0.0.1", 00:25:33.461 "trsvcid": "41926" 00:25:33.461 }, 00:25:33.461 "auth": { 00:25:33.461 "state": "completed", 00:25:33.461 "digest": "sha256", 00:25:33.461 "dhgroup": "null" 00:25:33.461 } 00:25:33.461 } 00:25:33.461 ]' 00:25:33.461 02:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:33.461 02:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:33.461 02:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:33.461 02:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:25:33.461 02:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:33.461 02:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:33.461 02:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:33.461 02:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:33.720 02:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:ZGE3NzU4MWJjOTY1YTllNGEyYzkxOTI2NjQ2MWZhNTZlYjQwNjU4NmFiYzg4NDdkFoZ5qw==: --dhchap-ctrl-secret DHHC-1:03:MWNiODM2ZDRmMjk4NTc5NzBlNWQ2NWM0MTRmYzUxMDQwNzhlOTk1MTA2M2E3YjQzMGEzM2U2NTRjZDRmYzEzMOafaCo=: 00:25:35.093 02:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:35.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:35.093 02:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:35.093 02:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.093 02:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:35.093 02:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.093 02:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:35.093 02:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:25:35.093 02:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:25:35.352 02:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:25:35.352 02:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:25:35.352 02:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:35.352 02:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:25:35.352 02:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:35.352 02:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:35.352 02:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:35.352 02:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.352 02:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:35.352 02:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.352 02:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:35.352 02:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:35.610 00:25:35.610 02:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:35.611 02:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:35.611 02:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
00:25:35.869 02:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.869 02:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:35.869 02:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.869 02:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:35.869 02:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.869 02:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:35.869 { 00:25:35.869 "cntlid": 3, 00:25:35.869 "qid": 0, 00:25:35.869 "state": "enabled", 00:25:35.869 "thread": "nvmf_tgt_poll_group_000", 00:25:35.869 "listen_address": { 00:25:35.869 "trtype": "TCP", 00:25:35.869 "adrfam": "IPv4", 00:25:35.869 "traddr": "10.0.0.2", 00:25:35.869 "trsvcid": "4420" 00:25:35.869 }, 00:25:35.869 "peer_address": { 00:25:35.869 "trtype": "TCP", 00:25:35.869 "adrfam": "IPv4", 00:25:35.869 "traddr": "10.0.0.1", 00:25:35.869 "trsvcid": "41956" 00:25:35.869 }, 00:25:35.869 "auth": { 00:25:35.869 "state": "completed", 00:25:35.869 "digest": "sha256", 00:25:35.869 "dhgroup": "null" 00:25:35.869 } 00:25:35.869 } 00:25:35.869 ]' 00:25:35.869 02:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:36.128 02:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:36.128 02:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:36.128 02:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:25:36.128 02:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:36.128 02:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:36.128 02:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:25:36.128 02:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:36.386 02:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:OTEyNTllNzNmYzdhNjRjMGNmY2YwM2ViOGMzMWE1MDZw2Nbs: --dhchap-ctrl-secret DHHC-1:02:YzdmYmRiN2E4M2NiMDZkZDY3ZGI0NTVkOGM0ZjA4YTA0YmFmMDgxZTgyYjZmYjlmAkH4TA==: 00:25:37.760 02:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:37.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:37.760 02:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:37.760 02:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.760 02:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:37.760 02:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.760 02:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:37.760 02:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:25:37.760 02:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:25:37.760 02:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:25:37.760 02:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # 
local digest dhgroup key ckey qpairs 00:25:37.760 02:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:37.760 02:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:25:37.760 02:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:37.760 02:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:37.760 02:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.760 02:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.760 02:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:37.760 02:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.760 02:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.760 02:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:38.326 00:25:38.326 02:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:38.326 02:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:38.326 02:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:25:38.585 02:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.585 02:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:38.585 02:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.585 02:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:38.585 02:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.585 02:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:38.585 { 00:25:38.585 "cntlid": 5, 00:25:38.585 "qid": 0, 00:25:38.585 "state": "enabled", 00:25:38.585 "thread": "nvmf_tgt_poll_group_000", 00:25:38.585 "listen_address": { 00:25:38.585 "trtype": "TCP", 00:25:38.585 "adrfam": "IPv4", 00:25:38.585 "traddr": "10.0.0.2", 00:25:38.585 "trsvcid": "4420" 00:25:38.585 }, 00:25:38.585 "peer_address": { 00:25:38.585 "trtype": "TCP", 00:25:38.585 "adrfam": "IPv4", 00:25:38.585 "traddr": "10.0.0.1", 00:25:38.585 "trsvcid": "41664" 00:25:38.585 }, 00:25:38.585 "auth": { 00:25:38.585 "state": "completed", 00:25:38.585 "digest": "sha256", 00:25:38.585 "dhgroup": "null" 00:25:38.585 } 00:25:38.585 } 00:25:38.585 ]' 00:25:38.585 02:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:38.585 02:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:38.585 02:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:38.585 02:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:25:38.585 02:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:38.585 02:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:38.585 02:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 
-- # hostrpc bdev_nvme_detach_controller nvme0 00:25:38.585 02:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:38.843 02:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:MGExOTg5NjMzODNiNTBkYmE5MGMwMWVkZTJiZWRkYWQ4NTgwYzk4YjZiNTEyMjIzDfLwSA==: --dhchap-ctrl-secret DHHC-1:01:MmM3N2QyNWJiMmYzMjAzMTIxYjE5NDAyYjMwOTQwYWRxrbTT: 00:25:40.217 02:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:40.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:40.217 02:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:40.217 02:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.217 02:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:40.217 02:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.217 02:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:40.217 02:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:25:40.217 02:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:25:40.475 02:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:25:40.475 02:31:30 
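The `--dhchap-ctrl-secret` on the `nvme connect` line above is just the DHHC-1 envelope around the hex key generated earlier in the trace (`2c77d25b...` for ckey2). Decoding its base64 payload recovers that ASCII hex string followed by a 4-byte trailer (assumed to be the CRC-32 added by `format_key`); the sketch below uses the exact secret from the log.

```shell
#!/usr/bin/env bash
# Secret copied verbatim from the connect line in the trace above.
secret='DHHC-1:01:MmM3N2QyNWJiMmYzMjAzMTIxYjE5NDAyYjMwOTQwYWRxrbTT:'
payload=${secret#DHHC-1:01:}   # strip prefix and digest id
payload=${payload%:}           # strip trailing colon
# First 32 decoded bytes are the ASCII hex key; the rest is the trailer.
decoded=$(printf '%s' "$payload" | base64 -d | head -c 32)
echo "$decoded"   # → 2c77d25bb2f3203121b19402b30940ad
```

Digest id `01` maps to sha256 in the digests table traced earlier, matching the `gen_dhchap_key sha256 32` call that produced this key.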
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:40.475 02:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:40.475 02:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:25:40.475 02:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:40.475 02:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:40.475 02:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:25:40.475 02:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.475 02:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:40.475 02:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.475 02:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:40.475 02:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:40.733 00:25:40.733 02:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:40.733 02:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:40.733 02:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:25:41.298 02:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.298 02:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:41.299 02:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.299 02:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:41.299 02:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.299 02:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:41.299 { 00:25:41.299 "cntlid": 7, 00:25:41.299 "qid": 0, 00:25:41.299 "state": "enabled", 00:25:41.299 "thread": "nvmf_tgt_poll_group_000", 00:25:41.299 "listen_address": { 00:25:41.299 "trtype": "TCP", 00:25:41.299 "adrfam": "IPv4", 00:25:41.299 "traddr": "10.0.0.2", 00:25:41.299 "trsvcid": "4420" 00:25:41.299 }, 00:25:41.299 "peer_address": { 00:25:41.299 "trtype": "TCP", 00:25:41.299 "adrfam": "IPv4", 00:25:41.299 "traddr": "10.0.0.1", 00:25:41.299 "trsvcid": "41696" 00:25:41.299 }, 00:25:41.299 "auth": { 00:25:41.299 "state": "completed", 00:25:41.299 "digest": "sha256", 00:25:41.299 "dhgroup": "null" 00:25:41.299 } 00:25:41.299 } 00:25:41.299 ]' 00:25:41.299 02:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:41.299 02:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:41.299 02:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:41.299 02:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:25:41.299 02:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:41.299 02:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:41.299 02:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:25:41.299 02:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:41.556 02:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YWRlYzI4ODUxZWE0ZTBlNTA0MmU5MDViYTQyYTViZGE4ODkwNjU0ZmU4YmYyMjAwZjgxYzdlYzJkYWM5OTI3OUbGX4w=: 00:25:42.962 02:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:42.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:42.962 02:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:42.962 02:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.962 02:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:42.962 02:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.962 02:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:25:42.962 02:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:42.962 02:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:42.962 02:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:42.962 02:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe2048 0 00:25:42.962 02:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:42.962 02:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:42.962 02:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:42.962 02:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:42.962 02:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:42.962 02:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:42.962 02:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.962 02:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:43.219 02:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.219 02:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:43.219 02:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:43.476 00:25:43.476 02:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:43.476 02:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:43.476 02:31:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:43.733 02:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.734 02:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:43.734 02:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.734 02:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:43.734 02:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.734 02:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:43.734 { 00:25:43.734 "cntlid": 9, 00:25:43.734 "qid": 0, 00:25:43.734 "state": "enabled", 00:25:43.734 "thread": "nvmf_tgt_poll_group_000", 00:25:43.734 "listen_address": { 00:25:43.734 "trtype": "TCP", 00:25:43.734 "adrfam": "IPv4", 00:25:43.734 "traddr": "10.0.0.2", 00:25:43.734 "trsvcid": "4420" 00:25:43.734 }, 00:25:43.734 "peer_address": { 00:25:43.734 "trtype": "TCP", 00:25:43.734 "adrfam": "IPv4", 00:25:43.734 "traddr": "10.0.0.1", 00:25:43.734 "trsvcid": "41722" 00:25:43.734 }, 00:25:43.734 "auth": { 00:25:43.734 "state": "completed", 00:25:43.734 "digest": "sha256", 00:25:43.734 "dhgroup": "ffdhe2048" 00:25:43.734 } 00:25:43.734 } 00:25:43.734 ]' 00:25:43.734 02:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:43.734 02:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:43.734 02:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:43.734 02:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:43.991 02:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:43.991 02:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:43.991 02:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:43.991 02:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:44.249 02:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:ZGE3NzU4MWJjOTY1YTllNGEyYzkxOTI2NjQ2MWZhNTZlYjQwNjU4NmFiYzg4NDdkFoZ5qw==: --dhchap-ctrl-secret DHHC-1:03:MWNiODM2ZDRmMjk4NTc5NzBlNWQ2NWM0MTRmYzUxMDQwNzhlOTk1MTA2M2E3YjQzMGEzM2U2NTRjZDRmYzEzMOafaCo=: 00:25:45.621 02:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:45.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:45.621 02:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:45.621 02:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.621 02:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:45.622 02:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.622 02:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:45.622 02:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:45.622 02:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:25:45.622 02:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:25:45.622 02:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:45.622 02:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:45.622 02:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:45.622 02:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:45.622 02:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:45.622 02:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:45.622 02:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.622 02:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:45.622 02:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.622 02:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:45.622 02:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:46.187 00:25:46.187 02:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:46.187 02:31:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:46.187 02:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:46.446 02:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.446 02:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:46.446 02:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.446 02:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:46.446 02:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.446 02:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:46.446 { 00:25:46.446 "cntlid": 11, 00:25:46.446 "qid": 0, 00:25:46.446 "state": "enabled", 00:25:46.446 "thread": "nvmf_tgt_poll_group_000", 00:25:46.446 "listen_address": { 00:25:46.446 "trtype": "TCP", 00:25:46.446 "adrfam": "IPv4", 00:25:46.446 "traddr": "10.0.0.2", 00:25:46.446 "trsvcid": "4420" 00:25:46.446 }, 00:25:46.446 "peer_address": { 00:25:46.446 "trtype": "TCP", 00:25:46.446 "adrfam": "IPv4", 00:25:46.446 "traddr": "10.0.0.1", 00:25:46.446 "trsvcid": "41746" 00:25:46.446 }, 00:25:46.446 "auth": { 00:25:46.446 "state": "completed", 00:25:46.446 "digest": "sha256", 00:25:46.446 "dhgroup": "ffdhe2048" 00:25:46.446 } 00:25:46.446 } 00:25:46.446 ]' 00:25:46.446 02:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:46.446 02:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:46.446 02:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:46.446 02:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:46.446 02:31:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:46.446 02:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:46.446 02:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:46.446 02:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:46.704 02:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:OTEyNTllNzNmYzdhNjRjMGNmY2YwM2ViOGMzMWE1MDZw2Nbs: --dhchap-ctrl-secret DHHC-1:02:YzdmYmRiN2E4M2NiMDZkZDY3ZGI0NTVkOGM0ZjA4YTA0YmFmMDgxZTgyYjZmYjlmAkH4TA==: 00:25:48.078 02:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:48.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:48.078 02:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:48.078 02:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.078 02:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:48.078 02:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.078 02:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:48.078 02:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:48.078 02:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:48.336 02:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:25:48.336 02:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:48.336 02:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:48.336 02:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:48.336 02:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:48.336 02:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:48.336 02:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:48.336 02:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.336 02:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:48.336 02:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.336 02:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:48.336 02:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:48.594 
00:25:48.594 02:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:48.594 02:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:48.594 02:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:49.159 02:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.159 02:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:49.159 02:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.159 02:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:49.159 02:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.159 02:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:49.159 { 00:25:49.159 "cntlid": 13, 00:25:49.159 "qid": 0, 00:25:49.159 "state": "enabled", 00:25:49.159 "thread": "nvmf_tgt_poll_group_000", 00:25:49.159 "listen_address": { 00:25:49.159 "trtype": "TCP", 00:25:49.159 "adrfam": "IPv4", 00:25:49.159 "traddr": "10.0.0.2", 00:25:49.159 "trsvcid": "4420" 00:25:49.159 }, 00:25:49.159 "peer_address": { 00:25:49.159 "trtype": "TCP", 00:25:49.159 "adrfam": "IPv4", 00:25:49.159 "traddr": "10.0.0.1", 00:25:49.159 "trsvcid": "45026" 00:25:49.159 }, 00:25:49.159 "auth": { 00:25:49.159 "state": "completed", 00:25:49.159 "digest": "sha256", 00:25:49.159 "dhgroup": "ffdhe2048" 00:25:49.159 } 00:25:49.159 } 00:25:49.159 ]' 00:25:49.159 02:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:49.159 02:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:49.159 02:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:49.159 02:31:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:49.159 02:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:49.159 02:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:49.159 02:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:49.159 02:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:49.418 02:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:MGExOTg5NjMzODNiNTBkYmE5MGMwMWVkZTJiZWRkYWQ4NTgwYzk4YjZiNTEyMjIzDfLwSA==: --dhchap-ctrl-secret DHHC-1:01:MmM3N2QyNWJiMmYzMjAzMTIxYjE5NDAyYjMwOTQwYWRxrbTT: 00:25:50.793 02:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:50.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:50.793 02:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:50.793 02:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.793 02:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:50.793 02:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.793 02:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:50.793 02:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:25:50.793 02:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:51.051 02:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:25:51.051 02:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:51.051 02:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:51.051 02:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:51.051 02:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:51.051 02:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:51.051 02:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:25:51.051 02:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.051 02:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:51.051 02:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.051 02:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:51.051 02:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:51.309 
00:25:51.309 02:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:51.309 02:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:51.309 02:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:51.567 02:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.567 02:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:51.567 02:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.567 02:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:51.567 02:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.567 02:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:51.567 { 00:25:51.567 "cntlid": 15, 00:25:51.567 "qid": 0, 00:25:51.567 "state": "enabled", 00:25:51.567 "thread": "nvmf_tgt_poll_group_000", 00:25:51.567 "listen_address": { 00:25:51.567 "trtype": "TCP", 00:25:51.567 "adrfam": "IPv4", 00:25:51.567 "traddr": "10.0.0.2", 00:25:51.567 "trsvcid": "4420" 00:25:51.567 }, 00:25:51.567 "peer_address": { 00:25:51.567 "trtype": "TCP", 00:25:51.567 "adrfam": "IPv4", 00:25:51.567 "traddr": "10.0.0.1", 00:25:51.567 "trsvcid": "45046" 00:25:51.567 }, 00:25:51.567 "auth": { 00:25:51.567 "state": "completed", 00:25:51.567 "digest": "sha256", 00:25:51.567 "dhgroup": "ffdhe2048" 00:25:51.567 } 00:25:51.567 } 00:25:51.567 ]' 00:25:51.567 02:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:51.567 02:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:51.567 02:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:51.567 02:31:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:51.567 02:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:51.825 02:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:51.825 02:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:51.825 02:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:52.083 02:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YWRlYzI4ODUxZWE0ZTBlNTA0MmU5MDViYTQyYTViZGE4ODkwNjU0ZmU4YmYyMjAwZjgxYzdlYzJkYWM5OTI3OUbGX4w=: 00:25:53.457 02:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:53.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:53.457 02:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:53.457 02:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.457 02:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:53.457 02:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.457 02:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:25:53.457 02:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:53.457 02:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:53.457 02:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:53.457 02:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:25:53.457 02:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:53.457 02:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:53.457 02:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:53.457 02:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:53.457 02:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:53.457 02:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:53.457 02:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.457 02:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:53.457 02:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.457 02:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:53.457 02:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:54.022 00:25:54.022 02:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:54.022 02:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:54.022 02:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:54.280 02:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.280 02:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:54.280 02:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.280 02:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:54.280 02:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.280 02:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:54.280 { 00:25:54.280 "cntlid": 17, 00:25:54.280 "qid": 0, 00:25:54.280 "state": "enabled", 00:25:54.280 "thread": "nvmf_tgt_poll_group_000", 00:25:54.280 "listen_address": { 00:25:54.280 "trtype": "TCP", 00:25:54.280 "adrfam": "IPv4", 00:25:54.280 "traddr": "10.0.0.2", 00:25:54.280 "trsvcid": "4420" 00:25:54.280 }, 00:25:54.280 "peer_address": { 00:25:54.280 "trtype": "TCP", 00:25:54.280 "adrfam": "IPv4", 00:25:54.280 "traddr": "10.0.0.1", 00:25:54.280 "trsvcid": "45068" 00:25:54.280 }, 00:25:54.280 "auth": { 00:25:54.280 "state": "completed", 00:25:54.280 "digest": "sha256", 00:25:54.280 "dhgroup": "ffdhe3072" 00:25:54.280 } 00:25:54.280 } 00:25:54.280 ]' 00:25:54.280 02:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:54.280 02:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha256 == \s\h\a\2\5\6 ]] 00:25:54.280 02:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:54.280 02:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:54.280 02:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:54.280 02:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:54.280 02:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:54.280 02:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:54.537 02:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:ZGE3NzU4MWJjOTY1YTllNGEyYzkxOTI2NjQ2MWZhNTZlYjQwNjU4NmFiYzg4NDdkFoZ5qw==: --dhchap-ctrl-secret DHHC-1:03:MWNiODM2ZDRmMjk4NTc5NzBlNWQ2NWM0MTRmYzUxMDQwNzhlOTk1MTA2M2E3YjQzMGEzM2U2NTRjZDRmYzEzMOafaCo=: 00:25:55.911 02:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:55.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:55.911 02:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:55.911 02:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.911 02:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:55.911 02:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.911 02:31:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:55.911 02:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:55.911 02:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:55.911 02:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:25:55.911 02:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:55.911 02:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:55.911 02:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:55.911 02:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:55.911 02:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:55.911 02:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:55.911 02:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.911 02:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:55.911 02:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.911 02:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:55.911 02:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:56.477 00:25:56.477 02:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:56.477 02:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:56.478 02:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:56.736 02:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.736 02:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:56.736 02:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.736 02:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:56.736 02:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.736 02:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:56.736 { 00:25:56.736 "cntlid": 19, 00:25:56.736 "qid": 0, 00:25:56.736 "state": "enabled", 00:25:56.736 "thread": "nvmf_tgt_poll_group_000", 00:25:56.736 "listen_address": { 00:25:56.736 "trtype": "TCP", 00:25:56.736 "adrfam": "IPv4", 00:25:56.736 "traddr": "10.0.0.2", 00:25:56.736 "trsvcid": "4420" 00:25:56.736 }, 00:25:56.736 "peer_address": { 00:25:56.736 "trtype": "TCP", 00:25:56.736 "adrfam": "IPv4", 00:25:56.736 "traddr": "10.0.0.1", 00:25:56.736 "trsvcid": "45090" 00:25:56.736 }, 00:25:56.736 "auth": { 00:25:56.736 "state": "completed", 00:25:56.736 "digest": "sha256", 00:25:56.736 "dhgroup": "ffdhe3072" 00:25:56.736 } 00:25:56.736 } 00:25:56.736 ]' 00:25:56.736 
02:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:56.736 02:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:56.736 02:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:56.736 02:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:56.736 02:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:56.993 02:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:56.993 02:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:56.993 02:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:57.251 02:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:OTEyNTllNzNmYzdhNjRjMGNmY2YwM2ViOGMzMWE1MDZw2Nbs: --dhchap-ctrl-secret DHHC-1:02:YzdmYmRiN2E4M2NiMDZkZDY3ZGI0NTVkOGM0ZjA4YTA0YmFmMDgxZTgyYjZmYjlmAkH4TA==: 00:25:58.623 02:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:58.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:58.623 02:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:58.623 02:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.623 02:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:58.623 02:31:48 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.623 02:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:25:58.623 02:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:58.623 02:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:58.623 02:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:25:58.623 02:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:58.623 02:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:25:58.623 02:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:58.623 02:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:58.623 02:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:58.623 02:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:58.623 02:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.623 02:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:58.623 02:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.623 02:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:25:58.623 02:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:59.186 00:25:59.186 02:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:59.186 02:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:59.186 02:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:59.444 02:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.444 02:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:59.444 02:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.444 02:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:59.444 02:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.444 02:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:59.444 { 00:25:59.444 "cntlid": 21, 00:25:59.444 "qid": 0, 00:25:59.444 "state": "enabled", 00:25:59.444 "thread": "nvmf_tgt_poll_group_000", 00:25:59.444 "listen_address": { 00:25:59.444 "trtype": "TCP", 00:25:59.444 "adrfam": "IPv4", 00:25:59.444 "traddr": "10.0.0.2", 00:25:59.444 "trsvcid": "4420" 00:25:59.444 }, 00:25:59.444 "peer_address": { 00:25:59.444 "trtype": "TCP", 00:25:59.444 "adrfam": "IPv4", 00:25:59.444 "traddr": "10.0.0.1", 00:25:59.444 "trsvcid": "56434" 00:25:59.444 }, 00:25:59.444 "auth": { 00:25:59.444 "state": "completed", 00:25:59.444 "digest": 
"sha256", 00:25:59.444 "dhgroup": "ffdhe3072" 00:25:59.444 } 00:25:59.444 } 00:25:59.444 ]' 00:25:59.444 02:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:59.444 02:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:25:59.444 02:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:59.444 02:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:59.444 02:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:59.444 02:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:59.444 02:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:59.444 02:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:59.702 02:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:MGExOTg5NjMzODNiNTBkYmE5MGMwMWVkZTJiZWRkYWQ4NTgwYzk4YjZiNTEyMjIzDfLwSA==: --dhchap-ctrl-secret DHHC-1:01:MmM3N2QyNWJiMmYzMjAzMTIxYjE5NDAyYjMwOTQwYWRxrbTT: 00:26:01.075 02:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:01.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:01.075 02:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:01.075 02:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.075 02:31:51 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:01.075 02:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.075 02:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:01.075 02:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:01.075 02:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:01.333 02:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:26:01.333 02:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:01.333 02:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:26:01.333 02:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:26:01.333 02:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:26:01.333 02:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:01.333 02:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:26:01.333 02:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.333 02:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:01.333 02:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.333 02:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:01.333 02:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:01.591 00:26:01.591 02:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:01.591 02:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:01.591 02:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:01.849 02:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.849 02:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:01.849 02:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.849 02:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:01.849 02:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.849 02:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:01.849 { 00:26:01.849 "cntlid": 23, 00:26:01.849 "qid": 0, 00:26:01.849 "state": "enabled", 00:26:01.849 "thread": "nvmf_tgt_poll_group_000", 00:26:01.849 "listen_address": { 00:26:01.849 "trtype": "TCP", 00:26:01.849 "adrfam": "IPv4", 00:26:01.849 "traddr": "10.0.0.2", 00:26:01.849 "trsvcid": "4420" 00:26:01.849 }, 00:26:01.849 "peer_address": { 00:26:01.849 "trtype": "TCP", 00:26:01.849 "adrfam": "IPv4", 00:26:01.849 "traddr": "10.0.0.1", 00:26:01.849 "trsvcid": "56460" 00:26:01.849 }, 00:26:01.849 "auth": 
{ 00:26:01.849 "state": "completed", 00:26:01.849 "digest": "sha256", 00:26:01.849 "dhgroup": "ffdhe3072" 00:26:01.849 } 00:26:01.849 } 00:26:01.849 ]' 00:26:01.849 02:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:02.107 02:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:26:02.107 02:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:02.107 02:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:26:02.107 02:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:02.107 02:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:02.107 02:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:02.107 02:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:02.365 02:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YWRlYzI4ODUxZWE0ZTBlNTA0MmU5MDViYTQyYTViZGE4ODkwNjU0ZmU4YmYyMjAwZjgxYzdlYzJkYWM5OTI3OUbGX4w=: 00:26:03.740 02:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:03.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:03.740 02:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:03.740 02:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.740 02:31:53 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:03.740 02:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.740 02:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:26:03.740 02:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:03.740 02:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:03.740 02:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:04.036 02:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:26:04.036 02:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:04.036 02:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:26:04.036 02:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:26:04.036 02:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:26:04.036 02:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:04.036 02:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:04.036 02:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.036 02:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:04.036 02:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.036 02:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:04.036 02:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:04.316 00:26:04.316 02:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:04.316 02:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:04.316 02:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:04.575 02:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.575 02:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:04.575 02:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.575 02:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:04.575 02:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.575 02:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:04.575 { 00:26:04.575 "cntlid": 25, 00:26:04.575 "qid": 0, 00:26:04.575 "state": "enabled", 00:26:04.575 "thread": "nvmf_tgt_poll_group_000", 00:26:04.575 "listen_address": { 00:26:04.575 "trtype": "TCP", 00:26:04.575 "adrfam": "IPv4", 00:26:04.575 "traddr": "10.0.0.2", 00:26:04.575 "trsvcid": "4420" 00:26:04.575 }, 00:26:04.575 "peer_address": { 00:26:04.575 "trtype": "TCP", 
00:26:04.575 "adrfam": "IPv4", 00:26:04.575 "traddr": "10.0.0.1", 00:26:04.575 "trsvcid": "56474" 00:26:04.575 }, 00:26:04.575 "auth": { 00:26:04.575 "state": "completed", 00:26:04.575 "digest": "sha256", 00:26:04.575 "dhgroup": "ffdhe4096" 00:26:04.575 } 00:26:04.575 } 00:26:04.575 ]' 00:26:04.575 02:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:04.575 02:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:26:04.575 02:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:04.833 02:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:26:04.833 02:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:04.833 02:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:04.833 02:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:04.833 02:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:05.091 02:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:ZGE3NzU4MWJjOTY1YTllNGEyYzkxOTI2NjQ2MWZhNTZlYjQwNjU4NmFiYzg4NDdkFoZ5qw==: --dhchap-ctrl-secret DHHC-1:03:MWNiODM2ZDRmMjk4NTc5NzBlNWQ2NWM0MTRmYzUxMDQwNzhlOTk1MTA2M2E3YjQzMGEzM2U2NTRjZDRmYzEzMOafaCo=: 00:26:06.465 02:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:06.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:06.465 02:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:06.465 02:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.465 02:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:06.465 02:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.465 02:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:06.465 02:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:06.465 02:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:06.465 02:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:26:06.465 02:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:06.465 02:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:26:06.465 02:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:26:06.465 02:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:26:06.465 02:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:06.465 02:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:06.465 02:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.465 02:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:06.465 02:31:56 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.465 02:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:06.465 02:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:07.031 00:26:07.031 02:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:07.031 02:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:07.031 02:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:07.288 02:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.288 02:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:07.288 02:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.288 02:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:07.288 02:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.288 02:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:07.288 { 00:26:07.288 "cntlid": 27, 00:26:07.288 "qid": 0, 00:26:07.288 "state": "enabled", 00:26:07.288 "thread": "nvmf_tgt_poll_group_000", 00:26:07.288 "listen_address": { 00:26:07.288 "trtype": "TCP", 00:26:07.288 "adrfam": 
"IPv4", 00:26:07.288 "traddr": "10.0.0.2", 00:26:07.288 "trsvcid": "4420" 00:26:07.288 }, 00:26:07.288 "peer_address": { 00:26:07.288 "trtype": "TCP", 00:26:07.288 "adrfam": "IPv4", 00:26:07.288 "traddr": "10.0.0.1", 00:26:07.288 "trsvcid": "38180" 00:26:07.288 }, 00:26:07.288 "auth": { 00:26:07.288 "state": "completed", 00:26:07.288 "digest": "sha256", 00:26:07.288 "dhgroup": "ffdhe4096" 00:26:07.288 } 00:26:07.288 } 00:26:07.288 ]' 00:26:07.288 02:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:07.288 02:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:26:07.288 02:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:07.288 02:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:26:07.288 02:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:07.545 02:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:07.545 02:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:07.545 02:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:07.803 02:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:OTEyNTllNzNmYzdhNjRjMGNmY2YwM2ViOGMzMWE1MDZw2Nbs: --dhchap-ctrl-secret DHHC-1:02:YzdmYmRiN2E4M2NiMDZkZDY3ZGI0NTVkOGM0ZjA4YTA0YmFmMDgxZTgyYjZmYjlmAkH4TA==: 00:26:09.176 02:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:09.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:26:09.176 02:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:09.176 02:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.176 02:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:09.176 02:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.176 02:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:09.176 02:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:09.176 02:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:09.176 02:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:26:09.176 02:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:09.176 02:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:26:09.176 02:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:26:09.176 02:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:26:09.176 02:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:09.177 02:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:09.177 02:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.177 02:31:59 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:09.177 02:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.177 02:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:09.177 02:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:09.741 00:26:09.741 02:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:09.741 02:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:09.741 02:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:10.000 02:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.000 02:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:10.000 02:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.000 02:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:10.000 02:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.000 02:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:10.000 { 00:26:10.000 "cntlid": 29, 00:26:10.000 "qid": 0, 00:26:10.000 "state": "enabled", 00:26:10.000 "thread": 
"nvmf_tgt_poll_group_000", 00:26:10.000 "listen_address": { 00:26:10.000 "trtype": "TCP", 00:26:10.000 "adrfam": "IPv4", 00:26:10.000 "traddr": "10.0.0.2", 00:26:10.000 "trsvcid": "4420" 00:26:10.000 }, 00:26:10.000 "peer_address": { 00:26:10.000 "trtype": "TCP", 00:26:10.000 "adrfam": "IPv4", 00:26:10.000 "traddr": "10.0.0.1", 00:26:10.000 "trsvcid": "38194" 00:26:10.000 }, 00:26:10.000 "auth": { 00:26:10.000 "state": "completed", 00:26:10.000 "digest": "sha256", 00:26:10.000 "dhgroup": "ffdhe4096" 00:26:10.000 } 00:26:10.000 } 00:26:10.000 ]' 00:26:10.000 02:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:10.000 02:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:26:10.000 02:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:10.000 02:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:26:10.000 02:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:10.000 02:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:10.000 02:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:10.000 02:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:10.258 02:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:MGExOTg5NjMzODNiNTBkYmE5MGMwMWVkZTJiZWRkYWQ4NTgwYzk4YjZiNTEyMjIzDfLwSA==: --dhchap-ctrl-secret DHHC-1:01:MmM3N2QyNWJiMmYzMjAzMTIxYjE5NDAyYjMwOTQwYWRxrbTT: 00:26:11.630 02:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:11.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:11.630 02:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:11.630 02:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.630 02:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:11.630 02:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.630 02:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:11.630 02:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:11.630 02:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:11.888 02:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:26:11.888 02:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:11.888 02:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:26:11.888 02:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:26:11.888 02:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:26:11.888 02:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:11.888 02:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:26:11.888 02:32:02 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.888 02:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:11.888 02:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.888 02:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:11.888 02:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:12.453 00:26:12.453 02:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:12.453 02:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:12.453 02:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:12.711 02:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.711 02:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:12.711 02:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.711 02:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:12.711 02:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.711 02:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:12.711 { 00:26:12.711 "cntlid": 31, 00:26:12.711 "qid": 0, 00:26:12.711 "state": "enabled", 00:26:12.711 "thread": 
"nvmf_tgt_poll_group_000", 00:26:12.711 "listen_address": { 00:26:12.711 "trtype": "TCP", 00:26:12.711 "adrfam": "IPv4", 00:26:12.711 "traddr": "10.0.0.2", 00:26:12.711 "trsvcid": "4420" 00:26:12.711 }, 00:26:12.711 "peer_address": { 00:26:12.711 "trtype": "TCP", 00:26:12.711 "adrfam": "IPv4", 00:26:12.711 "traddr": "10.0.0.1", 00:26:12.711 "trsvcid": "38210" 00:26:12.711 }, 00:26:12.711 "auth": { 00:26:12.711 "state": "completed", 00:26:12.711 "digest": "sha256", 00:26:12.711 "dhgroup": "ffdhe4096" 00:26:12.711 } 00:26:12.711 } 00:26:12.711 ]' 00:26:12.711 02:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:12.711 02:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:26:12.711 02:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:12.711 02:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:26:12.711 02:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:12.711 02:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:12.711 02:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:12.711 02:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:12.970 02:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YWRlYzI4ODUxZWE0ZTBlNTA0MmU5MDViYTQyYTViZGE4ODkwNjU0ZmU4YmYyMjAwZjgxYzdlYzJkYWM5OTI3OUbGX4w=: 00:26:14.341 02:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:14.341 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:14.341 02:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:14.341 02:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.341 02:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:14.341 02:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.341 02:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:26:14.341 02:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:14.341 02:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:14.341 02:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:14.599 02:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:26:14.599 02:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:14.599 02:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:26:14.599 02:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:26:14.599 02:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:26:14.599 02:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:14.599 02:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:26:14.599 02:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.599 02:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:14.599 02:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.599 02:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:14.599 02:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:15.164 00:26:15.164 02:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:15.164 02:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:15.164 02:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:15.422 02:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.422 02:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:15.422 02:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.422 02:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:15.422 02:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.422 02:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:26:15.422 { 00:26:15.422 "cntlid": 33, 00:26:15.422 "qid": 0, 00:26:15.422 "state": "enabled", 00:26:15.422 "thread": "nvmf_tgt_poll_group_000", 00:26:15.422 "listen_address": { 00:26:15.422 "trtype": "TCP", 00:26:15.422 "adrfam": "IPv4", 00:26:15.422 "traddr": "10.0.0.2", 00:26:15.422 "trsvcid": "4420" 00:26:15.422 }, 00:26:15.422 "peer_address": { 00:26:15.422 "trtype": "TCP", 00:26:15.422 "adrfam": "IPv4", 00:26:15.422 "traddr": "10.0.0.1", 00:26:15.422 "trsvcid": "38242" 00:26:15.422 }, 00:26:15.422 "auth": { 00:26:15.422 "state": "completed", 00:26:15.422 "digest": "sha256", 00:26:15.422 "dhgroup": "ffdhe6144" 00:26:15.422 } 00:26:15.422 } 00:26:15.422 ]' 00:26:15.422 02:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:15.422 02:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:26:15.422 02:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:15.680 02:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:26:15.680 02:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:15.680 02:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:15.680 02:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:15.680 02:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:15.939 02:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:ZGE3NzU4MWJjOTY1YTllNGEyYzkxOTI2NjQ2MWZhNTZlYjQwNjU4NmFiYzg4NDdkFoZ5qw==: --dhchap-ctrl-secret 
DHHC-1:03:MWNiODM2ZDRmMjk4NTc5NzBlNWQ2NWM0MTRmYzUxMDQwNzhlOTk1MTA2M2E3YjQzMGEzM2U2NTRjZDRmYzEzMOafaCo=: 00:26:17.312 02:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:17.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:17.312 02:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:17.312 02:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.312 02:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:17.312 02:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.312 02:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:17.312 02:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:17.312 02:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:17.312 02:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:26:17.312 02:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:17.312 02:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:26:17.312 02:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:26:17.312 02:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:26:17.312 02:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:17.313 02:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:17.313 02:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.313 02:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:17.313 02:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.313 02:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:17.313 02:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:18.247 00:26:18.247 02:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:18.247 02:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:18.247 02:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:18.247 02:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.247 02:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:18.247 02:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.247 02:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:18.247 02:32:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.247 02:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:18.247 { 00:26:18.247 "cntlid": 35, 00:26:18.247 "qid": 0, 00:26:18.247 "state": "enabled", 00:26:18.247 "thread": "nvmf_tgt_poll_group_000", 00:26:18.247 "listen_address": { 00:26:18.247 "trtype": "TCP", 00:26:18.247 "adrfam": "IPv4", 00:26:18.247 "traddr": "10.0.0.2", 00:26:18.247 "trsvcid": "4420" 00:26:18.247 }, 00:26:18.247 "peer_address": { 00:26:18.247 "trtype": "TCP", 00:26:18.247 "adrfam": "IPv4", 00:26:18.247 "traddr": "10.0.0.1", 00:26:18.247 "trsvcid": "43740" 00:26:18.247 }, 00:26:18.247 "auth": { 00:26:18.247 "state": "completed", 00:26:18.247 "digest": "sha256", 00:26:18.247 "dhgroup": "ffdhe6144" 00:26:18.247 } 00:26:18.247 } 00:26:18.247 ]' 00:26:18.247 02:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:18.504 02:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:26:18.504 02:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:18.504 02:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:26:18.504 02:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:18.504 02:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:18.504 02:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:18.504 02:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:18.761 02:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid 
a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:OTEyNTllNzNmYzdhNjRjMGNmY2YwM2ViOGMzMWE1MDZw2Nbs: --dhchap-ctrl-secret DHHC-1:02:YzdmYmRiN2E4M2NiMDZkZDY3ZGI0NTVkOGM0ZjA4YTA0YmFmMDgxZTgyYjZmYjlmAkH4TA==: 00:26:20.131 02:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:20.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:20.131 02:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:20.131 02:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.131 02:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:20.131 02:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.131 02:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:20.131 02:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:20.131 02:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:20.389 02:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:26:20.389 02:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:20.389 02:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:26:20.389 02:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:26:20.389 02:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:26:20.389 02:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:26:20.389 02:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:20.389 02:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.389 02:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:20.389 02:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.389 02:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:20.389 02:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:20.955 00:26:20.955 02:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:20.955 02:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:20.955 02:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:21.213 02:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.213 02:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:21.213 02:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.213 02:32:11 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:21.213 02:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.213 02:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:21.213 { 00:26:21.213 "cntlid": 37, 00:26:21.213 "qid": 0, 00:26:21.213 "state": "enabled", 00:26:21.213 "thread": "nvmf_tgt_poll_group_000", 00:26:21.213 "listen_address": { 00:26:21.213 "trtype": "TCP", 00:26:21.213 "adrfam": "IPv4", 00:26:21.213 "traddr": "10.0.0.2", 00:26:21.213 "trsvcid": "4420" 00:26:21.213 }, 00:26:21.213 "peer_address": { 00:26:21.213 "trtype": "TCP", 00:26:21.213 "adrfam": "IPv4", 00:26:21.213 "traddr": "10.0.0.1", 00:26:21.213 "trsvcid": "43764" 00:26:21.213 }, 00:26:21.213 "auth": { 00:26:21.213 "state": "completed", 00:26:21.213 "digest": "sha256", 00:26:21.213 "dhgroup": "ffdhe6144" 00:26:21.213 } 00:26:21.213 } 00:26:21.213 ]' 00:26:21.213 02:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:21.213 02:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:26:21.213 02:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:21.213 02:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:26:21.213 02:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:21.480 02:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:21.480 02:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:21.480 02:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:21.480 02:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:MGExOTg5NjMzODNiNTBkYmE5MGMwMWVkZTJiZWRkYWQ4NTgwYzk4YjZiNTEyMjIzDfLwSA==: --dhchap-ctrl-secret DHHC-1:01:MmM3N2QyNWJiMmYzMjAzMTIxYjE5NDAyYjMwOTQwYWRxrbTT: 00:26:22.853 02:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:22.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:22.853 02:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:22.853 02:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.853 02:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:22.853 02:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.853 02:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:22.853 02:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:22.853 02:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:23.111 02:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:26:23.111 02:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:23.111 02:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:26:23.111 02:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:26:23.111 02:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:26:23.111 02:32:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:23.111 02:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:26:23.111 02:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.111 02:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:23.111 02:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.111 02:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:23.111 02:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:23.675 00:26:23.675 02:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:23.675 02:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:23.675 02:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:23.932 02:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.932 02:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:23.932 02:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.933 02:32:14 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:23.933 02:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.933 02:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:23.933 { 00:26:23.933 "cntlid": 39, 00:26:23.933 "qid": 0, 00:26:23.933 "state": "enabled", 00:26:23.933 "thread": "nvmf_tgt_poll_group_000", 00:26:23.933 "listen_address": { 00:26:23.933 "trtype": "TCP", 00:26:23.933 "adrfam": "IPv4", 00:26:23.933 "traddr": "10.0.0.2", 00:26:23.933 "trsvcid": "4420" 00:26:23.933 }, 00:26:23.933 "peer_address": { 00:26:23.933 "trtype": "TCP", 00:26:23.933 "adrfam": "IPv4", 00:26:23.933 "traddr": "10.0.0.1", 00:26:23.933 "trsvcid": "43802" 00:26:23.933 }, 00:26:23.933 "auth": { 00:26:23.933 "state": "completed", 00:26:23.933 "digest": "sha256", 00:26:23.933 "dhgroup": "ffdhe6144" 00:26:23.933 } 00:26:23.933 } 00:26:23.933 ]' 00:26:23.933 02:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:24.190 02:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:26:24.190 02:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:24.190 02:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:26:24.190 02:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:24.190 02:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:24.190 02:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:24.190 02:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:24.447 02:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YWRlYzI4ODUxZWE0ZTBlNTA0MmU5MDViYTQyYTViZGE4ODkwNjU0ZmU4YmYyMjAwZjgxYzdlYzJkYWM5OTI3OUbGX4w=: 00:26:25.857 02:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:25.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:25.857 02:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:25.857 02:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.857 02:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:25.857 02:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.857 02:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:26:25.857 02:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:25.857 02:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:25.857 02:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:25.857 02:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:26:25.857 02:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:25.857 02:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:26:25.857 02:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:26:25.857 02:32:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:26:25.857 02:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:25.858 02:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:25.858 02:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.858 02:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:25.858 02:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.858 02:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:25.858 02:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:27.231 00:26:27.231 02:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:27.231 02:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:27.231 02:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:27.231 02:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.231 02:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:26:27.231 02:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.231 02:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:27.231 02:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.231 02:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:27.231 { 00:26:27.231 "cntlid": 41, 00:26:27.231 "qid": 0, 00:26:27.231 "state": "enabled", 00:26:27.231 "thread": "nvmf_tgt_poll_group_000", 00:26:27.231 "listen_address": { 00:26:27.231 "trtype": "TCP", 00:26:27.231 "adrfam": "IPv4", 00:26:27.231 "traddr": "10.0.0.2", 00:26:27.231 "trsvcid": "4420" 00:26:27.231 }, 00:26:27.231 "peer_address": { 00:26:27.231 "trtype": "TCP", 00:26:27.231 "adrfam": "IPv4", 00:26:27.231 "traddr": "10.0.0.1", 00:26:27.231 "trsvcid": "43840" 00:26:27.231 }, 00:26:27.231 "auth": { 00:26:27.231 "state": "completed", 00:26:27.231 "digest": "sha256", 00:26:27.231 "dhgroup": "ffdhe8192" 00:26:27.231 } 00:26:27.231 } 00:26:27.231 ]' 00:26:27.231 02:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:27.489 02:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:26:27.489 02:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:27.489 02:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:26:27.489 02:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:27.489 02:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:27.489 02:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:27.489 02:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:26:27.747 02:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:ZGE3NzU4MWJjOTY1YTllNGEyYzkxOTI2NjQ2MWZhNTZlYjQwNjU4NmFiYzg4NDdkFoZ5qw==: --dhchap-ctrl-secret DHHC-1:03:MWNiODM2ZDRmMjk4NTc5NzBlNWQ2NWM0MTRmYzUxMDQwNzhlOTk1MTA2M2E3YjQzMGEzM2U2NTRjZDRmYzEzMOafaCo=: 00:26:29.122 02:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:29.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:29.122 02:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:29.122 02:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.122 02:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:29.122 02:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.122 02:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:29.122 02:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:29.122 02:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:29.122 02:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:26:29.122 02:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:29.122 02:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:26:29.122 02:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:26:29.122 02:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:26:29.122 02:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:29.122 02:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:29.122 02:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.122 02:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:29.122 02:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.122 02:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:29.122 02:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:30.496 00:26:30.496 02:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:30.496 02:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:30.496 02:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:30.496 02:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:26:30.496 02:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:30.496 02:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.496 02:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:30.496 02:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.496 02:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:30.496 { 00:26:30.496 "cntlid": 43, 00:26:30.496 "qid": 0, 00:26:30.496 "state": "enabled", 00:26:30.496 "thread": "nvmf_tgt_poll_group_000", 00:26:30.496 "listen_address": { 00:26:30.496 "trtype": "TCP", 00:26:30.496 "adrfam": "IPv4", 00:26:30.496 "traddr": "10.0.0.2", 00:26:30.496 "trsvcid": "4420" 00:26:30.496 }, 00:26:30.496 "peer_address": { 00:26:30.496 "trtype": "TCP", 00:26:30.496 "adrfam": "IPv4", 00:26:30.496 "traddr": "10.0.0.1", 00:26:30.496 "trsvcid": "44728" 00:26:30.496 }, 00:26:30.496 "auth": { 00:26:30.496 "state": "completed", 00:26:30.496 "digest": "sha256", 00:26:30.496 "dhgroup": "ffdhe8192" 00:26:30.496 } 00:26:30.496 } 00:26:30.496 ]' 00:26:30.496 02:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:30.754 02:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:26:30.754 02:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:30.754 02:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:26:30.754 02:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:30.754 02:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:30.754 02:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:30.754 02:32:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:31.012 02:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:OTEyNTllNzNmYzdhNjRjMGNmY2YwM2ViOGMzMWE1MDZw2Nbs: --dhchap-ctrl-secret DHHC-1:02:YzdmYmRiN2E4M2NiMDZkZDY3ZGI0NTVkOGM0ZjA4YTA0YmFmMDgxZTgyYjZmYjlmAkH4TA==: 00:26:32.387 02:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:32.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:32.387 02:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:32.387 02:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.387 02:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:32.387 02:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.387 02:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:32.387 02:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:32.387 02:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:32.645 02:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:26:32.645 02:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:26:32.645 02:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:26:32.645 02:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:26:32.645 02:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:26:32.645 02:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:32.646 02:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:32.646 02:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.646 02:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:32.646 02:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.646 02:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:32.646 02:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:33.579 00:26:33.579 02:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:33.579 02:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:33.579 02:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:26:33.837 02:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.837 02:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:33.837 02:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.837 02:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:33.837 02:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.837 02:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:33.837 { 00:26:33.837 "cntlid": 45, 00:26:33.837 "qid": 0, 00:26:33.837 "state": "enabled", 00:26:33.837 "thread": "nvmf_tgt_poll_group_000", 00:26:33.837 "listen_address": { 00:26:33.837 "trtype": "TCP", 00:26:33.837 "adrfam": "IPv4", 00:26:33.837 "traddr": "10.0.0.2", 00:26:33.837 "trsvcid": "4420" 00:26:33.837 }, 00:26:33.837 "peer_address": { 00:26:33.837 "trtype": "TCP", 00:26:33.837 "adrfam": "IPv4", 00:26:33.837 "traddr": "10.0.0.1", 00:26:33.837 "trsvcid": "44742" 00:26:33.837 }, 00:26:33.837 "auth": { 00:26:33.837 "state": "completed", 00:26:33.837 "digest": "sha256", 00:26:33.837 "dhgroup": "ffdhe8192" 00:26:33.837 } 00:26:33.837 } 00:26:33.837 ]' 00:26:33.837 02:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:33.837 02:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:26:33.837 02:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:34.095 02:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:26:34.095 02:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:34.095 02:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:34.095 02:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:26:34.095 02:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:34.353 02:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:MGExOTg5NjMzODNiNTBkYmE5MGMwMWVkZTJiZWRkYWQ4NTgwYzk4YjZiNTEyMjIzDfLwSA==: --dhchap-ctrl-secret DHHC-1:01:MmM3N2QyNWJiMmYzMjAzMTIxYjE5NDAyYjMwOTQwYWRxrbTT: 00:26:35.725 02:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:35.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:35.725 02:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:35.725 02:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.725 02:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:35.725 02:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.725 02:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:35.725 02:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:35.725 02:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:35.725 02:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:26:35.725 02:32:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:35.725 02:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:26:35.725 02:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:26:35.725 02:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:26:35.725 02:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:35.725 02:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:26:35.725 02:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.725 02:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:35.725 02:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.725 02:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:35.725 02:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:37.094 00:26:37.094 02:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:37.094 02:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:37.094 02:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:26:37.094 02:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.094 02:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:37.094 02:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.094 02:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:37.094 02:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.094 02:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:37.094 { 00:26:37.094 "cntlid": 47, 00:26:37.094 "qid": 0, 00:26:37.094 "state": "enabled", 00:26:37.094 "thread": "nvmf_tgt_poll_group_000", 00:26:37.094 "listen_address": { 00:26:37.094 "trtype": "TCP", 00:26:37.094 "adrfam": "IPv4", 00:26:37.094 "traddr": "10.0.0.2", 00:26:37.094 "trsvcid": "4420" 00:26:37.094 }, 00:26:37.094 "peer_address": { 00:26:37.094 "trtype": "TCP", 00:26:37.094 "adrfam": "IPv4", 00:26:37.094 "traddr": "10.0.0.1", 00:26:37.094 "trsvcid": "44780" 00:26:37.094 }, 00:26:37.094 "auth": { 00:26:37.094 "state": "completed", 00:26:37.094 "digest": "sha256", 00:26:37.094 "dhgroup": "ffdhe8192" 00:26:37.094 } 00:26:37.094 } 00:26:37.094 ]' 00:26:37.094 02:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:37.094 02:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:26:37.094 02:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:37.094 02:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:26:37.094 02:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:37.351 02:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:37.351 02:32:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:37.351 02:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:37.608 02:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YWRlYzI4ODUxZWE0ZTBlNTA0MmU5MDViYTQyYTViZGE4ODkwNjU0ZmU4YmYyMjAwZjgxYzdlYzJkYWM5OTI3OUbGX4w=: 00:26:38.980 02:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:38.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:38.980 02:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:38.980 02:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.980 02:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:38.980 02:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.980 02:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:26:38.980 02:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:26:38.980 02:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:38.980 02:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:26:38.980 02:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:26:38.980 02:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:26:38.980 02:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:38.980 02:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:38.980 02:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:26:38.980 02:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:26:38.980 02:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:38.980 02:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:38.980 02:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.980 02:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:38.980 02:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.980 02:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:38.980 02:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:39.546 00:26:39.546 02:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:39.546 02:32:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:39.546 02:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:39.804 02:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.804 02:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:39.804 02:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.804 02:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:39.804 02:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.804 02:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:39.804 { 00:26:39.804 "cntlid": 49, 00:26:39.804 "qid": 0, 00:26:39.804 "state": "enabled", 00:26:39.804 "thread": "nvmf_tgt_poll_group_000", 00:26:39.804 "listen_address": { 00:26:39.804 "trtype": "TCP", 00:26:39.804 "adrfam": "IPv4", 00:26:39.804 "traddr": "10.0.0.2", 00:26:39.804 "trsvcid": "4420" 00:26:39.804 }, 00:26:39.804 "peer_address": { 00:26:39.804 "trtype": "TCP", 00:26:39.804 "adrfam": "IPv4", 00:26:39.804 "traddr": "10.0.0.1", 00:26:39.804 "trsvcid": "37046" 00:26:39.804 }, 00:26:39.804 "auth": { 00:26:39.804 "state": "completed", 00:26:39.804 "digest": "sha384", 00:26:39.804 "dhgroup": "null" 00:26:39.804 } 00:26:39.804 } 00:26:39.804 ]' 00:26:39.804 02:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:39.804 02:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:39.804 02:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:39.804 02:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:26:39.804 02:32:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:39.804 02:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:39.804 02:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:39.804 02:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:40.062 02:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:ZGE3NzU4MWJjOTY1YTllNGEyYzkxOTI2NjQ2MWZhNTZlYjQwNjU4NmFiYzg4NDdkFoZ5qw==: --dhchap-ctrl-secret DHHC-1:03:MWNiODM2ZDRmMjk4NTc5NzBlNWQ2NWM0MTRmYzUxMDQwNzhlOTk1MTA2M2E3YjQzMGEzM2U2NTRjZDRmYzEzMOafaCo=: 00:26:41.436 02:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:41.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:41.436 02:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:41.436 02:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.436 02:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:41.436 02:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.436 02:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:41.436 02:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:26:41.436 02:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:26:41.694 02:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:26:41.694 02:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:41.694 02:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:41.694 02:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:26:41.694 02:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:26:41.694 02:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:41.694 02:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:41.694 02:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.694 02:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:41.694 02:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.694 02:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:41.694 02:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:41.951 00:26:41.951 
02:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:41.951 02:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:41.951 02:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:42.209 02:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.209 02:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:42.209 02:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.209 02:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:42.209 02:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.209 02:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:42.209 { 00:26:42.209 "cntlid": 51, 00:26:42.209 "qid": 0, 00:26:42.209 "state": "enabled", 00:26:42.209 "thread": "nvmf_tgt_poll_group_000", 00:26:42.209 "listen_address": { 00:26:42.209 "trtype": "TCP", 00:26:42.209 "adrfam": "IPv4", 00:26:42.209 "traddr": "10.0.0.2", 00:26:42.209 "trsvcid": "4420" 00:26:42.209 }, 00:26:42.209 "peer_address": { 00:26:42.209 "trtype": "TCP", 00:26:42.209 "adrfam": "IPv4", 00:26:42.209 "traddr": "10.0.0.1", 00:26:42.209 "trsvcid": "37068" 00:26:42.209 }, 00:26:42.209 "auth": { 00:26:42.209 "state": "completed", 00:26:42.209 "digest": "sha384", 00:26:42.209 "dhgroup": "null" 00:26:42.209 } 00:26:42.209 } 00:26:42.209 ]' 00:26:42.209 02:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:42.209 02:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:42.209 02:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:42.466 02:32:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:26:42.466 02:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:42.466 02:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:42.466 02:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:42.466 02:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:42.724 02:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:OTEyNTllNzNmYzdhNjRjMGNmY2YwM2ViOGMzMWE1MDZw2Nbs: --dhchap-ctrl-secret DHHC-1:02:YzdmYmRiN2E4M2NiMDZkZDY3ZGI0NTVkOGM0ZjA4YTA0YmFmMDgxZTgyYjZmYjlmAkH4TA==: 00:26:44.098 02:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:44.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:44.098 02:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:44.098 02:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.098 02:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:44.098 02:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.098 02:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:44.098 02:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:26:44.098 02:32:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:26:44.098 02:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:26:44.098 02:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:44.099 02:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:44.099 02:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:26:44.099 02:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:26:44.099 02:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:44.099 02:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:44.099 02:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.099 02:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:44.099 02:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.099 02:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:44.099 02:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:26:44.664 00:26:44.664 02:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:44.664 02:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:44.664 02:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:44.923 02:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.923 02:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:44.923 02:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.923 02:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:44.923 02:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.923 02:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:44.923 { 00:26:44.923 "cntlid": 53, 00:26:44.923 "qid": 0, 00:26:44.923 "state": "enabled", 00:26:44.923 "thread": "nvmf_tgt_poll_group_000", 00:26:44.923 "listen_address": { 00:26:44.923 "trtype": "TCP", 00:26:44.923 "adrfam": "IPv4", 00:26:44.923 "traddr": "10.0.0.2", 00:26:44.923 "trsvcid": "4420" 00:26:44.923 }, 00:26:44.923 "peer_address": { 00:26:44.923 "trtype": "TCP", 00:26:44.923 "adrfam": "IPv4", 00:26:44.923 "traddr": "10.0.0.1", 00:26:44.923 "trsvcid": "37088" 00:26:44.923 }, 00:26:44.923 "auth": { 00:26:44.923 "state": "completed", 00:26:44.923 "digest": "sha384", 00:26:44.923 "dhgroup": "null" 00:26:44.923 } 00:26:44.923 } 00:26:44.923 ]' 00:26:44.923 02:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:44.923 02:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:44.923 02:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:26:44.923 02:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:26:44.923 02:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:44.923 02:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:44.923 02:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:44.923 02:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:45.182 02:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:MGExOTg5NjMzODNiNTBkYmE5MGMwMWVkZTJiZWRkYWQ4NTgwYzk4YjZiNTEyMjIzDfLwSA==: --dhchap-ctrl-secret DHHC-1:01:MmM3N2QyNWJiMmYzMjAzMTIxYjE5NDAyYjMwOTQwYWRxrbTT: 00:26:46.554 02:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:46.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:46.554 02:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:46.554 02:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.554 02:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:46.554 02:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.554 02:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:46.554 02:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:26:46.554 02:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:26:46.812 02:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:26:46.812 02:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:46.812 02:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:46.812 02:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:26:46.812 02:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:26:46.812 02:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:46.812 02:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:26:46.812 02:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.812 02:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:46.812 02:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.812 02:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:46.812 02:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:26:47.070 00:26:47.070 02:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:47.070 02:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:47.070 02:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:47.352 02:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.352 02:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:47.352 02:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.352 02:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:47.352 02:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.352 02:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:47.352 { 00:26:47.352 "cntlid": 55, 00:26:47.352 "qid": 0, 00:26:47.352 "state": "enabled", 00:26:47.352 "thread": "nvmf_tgt_poll_group_000", 00:26:47.352 "listen_address": { 00:26:47.352 "trtype": "TCP", 00:26:47.352 "adrfam": "IPv4", 00:26:47.352 "traddr": "10.0.0.2", 00:26:47.352 "trsvcid": "4420" 00:26:47.352 }, 00:26:47.352 "peer_address": { 00:26:47.352 "trtype": "TCP", 00:26:47.352 "adrfam": "IPv4", 00:26:47.352 "traddr": "10.0.0.1", 00:26:47.352 "trsvcid": "50792" 00:26:47.352 }, 00:26:47.352 "auth": { 00:26:47.352 "state": "completed", 00:26:47.352 "digest": "sha384", 00:26:47.352 "dhgroup": "null" 00:26:47.352 } 00:26:47.352 } 00:26:47.352 ]' 00:26:47.352 02:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:47.641 02:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:47.641 02:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:47.641 
02:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:26:47.641 02:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:47.641 02:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:47.641 02:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:47.641 02:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:47.899 02:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YWRlYzI4ODUxZWE0ZTBlNTA0MmU5MDViYTQyYTViZGE4ODkwNjU0ZmU4YmYyMjAwZjgxYzdlYzJkYWM5OTI3OUbGX4w=: 00:26:49.273 02:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:49.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:49.273 02:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:49.273 02:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.273 02:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:49.273 02:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.273 02:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:26:49.273 02:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:49.273 02:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:49.273 02:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:49.273 02:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:26:49.273 02:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:49.273 02:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:49.273 02:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:26:49.273 02:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:26:49.273 02:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:49.273 02:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:49.273 02:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.273 02:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:49.273 02:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.273 02:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:49.273 02:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:49.837 00:26:49.837 02:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:49.837 02:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:49.837 02:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:50.096 02:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.096 02:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:50.096 02:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.096 02:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:50.096 02:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.096 02:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:50.097 { 00:26:50.097 "cntlid": 57, 00:26:50.097 "qid": 0, 00:26:50.097 "state": "enabled", 00:26:50.097 "thread": "nvmf_tgt_poll_group_000", 00:26:50.097 "listen_address": { 00:26:50.097 "trtype": "TCP", 00:26:50.097 "adrfam": "IPv4", 00:26:50.097 "traddr": "10.0.0.2", 00:26:50.097 "trsvcid": "4420" 00:26:50.097 }, 00:26:50.097 "peer_address": { 00:26:50.097 "trtype": "TCP", 00:26:50.097 "adrfam": "IPv4", 00:26:50.097 "traddr": "10.0.0.1", 00:26:50.097 "trsvcid": "50828" 00:26:50.097 }, 00:26:50.097 "auth": { 00:26:50.097 "state": "completed", 00:26:50.097 "digest": "sha384", 00:26:50.097 "dhgroup": "ffdhe2048" 00:26:50.097 } 00:26:50.097 } 00:26:50.097 ]' 00:26:50.097 02:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:50.097 02:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:26:50.097 02:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:50.097 02:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:26:50.097 02:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:50.097 02:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:50.097 02:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:50.097 02:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:50.355 02:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:ZGE3NzU4MWJjOTY1YTllNGEyYzkxOTI2NjQ2MWZhNTZlYjQwNjU4NmFiYzg4NDdkFoZ5qw==: --dhchap-ctrl-secret DHHC-1:03:MWNiODM2ZDRmMjk4NTc5NzBlNWQ2NWM0MTRmYzUxMDQwNzhlOTk1MTA2M2E3YjQzMGEzM2U2NTRjZDRmYzEzMOafaCo=: 00:26:51.731 02:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:51.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:51.731 02:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:51.731 02:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.731 02:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:51.731 02:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.731 02:32:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:51.731 02:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:51.731 02:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:51.989 02:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:26:51.989 02:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:51.989 02:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:51.989 02:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:26:51.989 02:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:26:51.989 02:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:51.989 02:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:51.989 02:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.989 02:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:51.989 02:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.989 02:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:51.989 02:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:52.247 00:26:52.247 02:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:52.247 02:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:52.247 02:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:52.504 02:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.504 02:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:52.504 02:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.504 02:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:52.504 02:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.504 02:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:52.504 { 00:26:52.504 "cntlid": 59, 00:26:52.504 "qid": 0, 00:26:52.504 "state": "enabled", 00:26:52.504 "thread": "nvmf_tgt_poll_group_000", 00:26:52.504 "listen_address": { 00:26:52.504 "trtype": "TCP", 00:26:52.504 "adrfam": "IPv4", 00:26:52.504 "traddr": "10.0.0.2", 00:26:52.504 "trsvcid": "4420" 00:26:52.504 }, 00:26:52.504 "peer_address": { 00:26:52.504 "trtype": "TCP", 00:26:52.504 "adrfam": "IPv4", 00:26:52.504 "traddr": "10.0.0.1", 00:26:52.504 "trsvcid": "50858" 00:26:52.504 }, 00:26:52.504 "auth": { 00:26:52.504 "state": "completed", 00:26:52.504 "digest": "sha384", 00:26:52.504 "dhgroup": "ffdhe2048" 00:26:52.504 } 00:26:52.504 } 00:26:52.504 ]' 00:26:52.504 
02:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:52.763 02:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:52.763 02:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:52.763 02:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:26:52.763 02:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:52.763 02:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:52.763 02:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:52.763 02:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:53.021 02:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:OTEyNTllNzNmYzdhNjRjMGNmY2YwM2ViOGMzMWE1MDZw2Nbs: --dhchap-ctrl-secret DHHC-1:02:YzdmYmRiN2E4M2NiMDZkZDY3ZGI0NTVkOGM0ZjA4YTA0YmFmMDgxZTgyYjZmYjlmAkH4TA==: 00:26:54.392 02:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:54.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:54.392 02:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:54.392 02:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.392 02:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:54.392 02:32:44 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.392 02:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:54.392 02:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:54.392 02:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:54.392 02:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:26:54.392 02:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:54.392 02:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:54.392 02:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:26:54.392 02:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:26:54.392 02:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:54.392 02:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:54.392 02:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.392 02:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:54.392 02:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.392 02:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:26:54.392 02:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:54.956 00:26:54.956 02:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:54.956 02:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:54.956 02:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:55.213 02:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.213 02:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:55.213 02:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.213 02:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:55.213 02:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.213 02:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:55.213 { 00:26:55.213 "cntlid": 61, 00:26:55.213 "qid": 0, 00:26:55.213 "state": "enabled", 00:26:55.213 "thread": "nvmf_tgt_poll_group_000", 00:26:55.213 "listen_address": { 00:26:55.213 "trtype": "TCP", 00:26:55.213 "adrfam": "IPv4", 00:26:55.213 "traddr": "10.0.0.2", 00:26:55.213 "trsvcid": "4420" 00:26:55.213 }, 00:26:55.213 "peer_address": { 00:26:55.213 "trtype": "TCP", 00:26:55.213 "adrfam": "IPv4", 00:26:55.213 "traddr": "10.0.0.1", 00:26:55.213 "trsvcid": "50866" 00:26:55.213 }, 00:26:55.213 "auth": { 00:26:55.213 "state": "completed", 00:26:55.213 "digest": 
"sha384", 00:26:55.213 "dhgroup": "ffdhe2048" 00:26:55.213 } 00:26:55.213 } 00:26:55.213 ]' 00:26:55.213 02:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:55.213 02:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:55.213 02:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:55.213 02:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:26:55.213 02:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:55.213 02:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:55.213 02:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:55.213 02:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:55.780 02:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:MGExOTg5NjMzODNiNTBkYmE5MGMwMWVkZTJiZWRkYWQ4NTgwYzk4YjZiNTEyMjIzDfLwSA==: --dhchap-ctrl-secret DHHC-1:01:MmM3N2QyNWJiMmYzMjAzMTIxYjE5NDAyYjMwOTQwYWRxrbTT: 00:26:56.714 02:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:56.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:56.714 02:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:56.714 02:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.714 02:32:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:56.714 02:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.714 02:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:56.714 02:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:56.714 02:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:56.972 02:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:26:56.972 02:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:56.972 02:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:56.972 02:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:26:56.972 02:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:26:56.972 02:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:56.972 02:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:26:56.972 02:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.972 02:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:56.972 02:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.972 02:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:56.972 02:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:57.538 00:26:57.538 02:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:57.538 02:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:57.538 02:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:57.796 02:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.796 02:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:57.796 02:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.796 02:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:57.796 02:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.796 02:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:26:57.796 { 00:26:57.796 "cntlid": 63, 00:26:57.796 "qid": 0, 00:26:57.796 "state": "enabled", 00:26:57.796 "thread": "nvmf_tgt_poll_group_000", 00:26:57.796 "listen_address": { 00:26:57.796 "trtype": "TCP", 00:26:57.796 "adrfam": "IPv4", 00:26:57.796 "traddr": "10.0.0.2", 00:26:57.796 "trsvcid": "4420" 00:26:57.796 }, 00:26:57.796 "peer_address": { 00:26:57.796 "trtype": "TCP", 00:26:57.796 "adrfam": "IPv4", 00:26:57.796 "traddr": "10.0.0.1", 00:26:57.796 "trsvcid": "58146" 00:26:57.796 }, 00:26:57.796 "auth": 
{ 00:26:57.796 "state": "completed", 00:26:57.796 "digest": "sha384", 00:26:57.796 "dhgroup": "ffdhe2048" 00:26:57.796 } 00:26:57.796 } 00:26:57.796 ]' 00:26:57.796 02:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:26:57.796 02:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:26:57.796 02:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:26:57.796 02:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:26:57.796 02:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:26:57.796 02:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:57.796 02:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:57.796 02:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:58.055 02:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YWRlYzI4ODUxZWE0ZTBlNTA0MmU5MDViYTQyYTViZGE4ODkwNjU0ZmU4YmYyMjAwZjgxYzdlYzJkYWM5OTI3OUbGX4w=: 00:26:59.429 02:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:59.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:59.429 02:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:59.429 02:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.429 02:32:49 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:59.429 02:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.429 02:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:26:59.429 02:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:26:59.429 02:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:59.429 02:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:59.688 02:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:26:59.688 02:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:26:59.688 02:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:26:59.688 02:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:26:59.688 02:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:26:59.688 02:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:26:59.688 02:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:59.688 02:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.688 02:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:59.688 02:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.688 02:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:59.688 02:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:59.946 00:26:59.946 02:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:26:59.946 02:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:26:59.946 02:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:00.205 02:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.205 02:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:00.205 02:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.205 02:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:00.463 02:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.463 02:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:00.463 { 00:27:00.463 "cntlid": 65, 00:27:00.463 "qid": 0, 00:27:00.463 "state": "enabled", 00:27:00.463 "thread": "nvmf_tgt_poll_group_000", 00:27:00.463 "listen_address": { 00:27:00.463 "trtype": "TCP", 00:27:00.463 "adrfam": "IPv4", 00:27:00.463 "traddr": "10.0.0.2", 00:27:00.463 "trsvcid": "4420" 00:27:00.463 }, 00:27:00.463 "peer_address": { 00:27:00.463 "trtype": "TCP", 
00:27:00.463 "adrfam": "IPv4", 00:27:00.463 "traddr": "10.0.0.1", 00:27:00.463 "trsvcid": "58184" 00:27:00.463 }, 00:27:00.463 "auth": { 00:27:00.463 "state": "completed", 00:27:00.463 "digest": "sha384", 00:27:00.463 "dhgroup": "ffdhe3072" 00:27:00.463 } 00:27:00.463 } 00:27:00.463 ]' 00:27:00.463 02:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:00.463 02:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:00.463 02:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:00.463 02:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:27:00.463 02:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:00.463 02:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:00.463 02:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:00.463 02:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:00.721 02:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:ZGE3NzU4MWJjOTY1YTllNGEyYzkxOTI2NjQ2MWZhNTZlYjQwNjU4NmFiYzg4NDdkFoZ5qw==: --dhchap-ctrl-secret DHHC-1:03:MWNiODM2ZDRmMjk4NTc5NzBlNWQ2NWM0MTRmYzUxMDQwNzhlOTk1MTA2M2E3YjQzMGEzM2U2NTRjZDRmYzEzMOafaCo=: 00:27:02.097 02:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:02.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:02.097 02:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:02.097 02:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.097 02:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:02.097 02:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.097 02:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:02.097 02:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:02.097 02:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:02.355 02:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:27:02.355 02:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:02.355 02:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:02.355 02:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:27:02.355 02:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:27:02.355 02:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:02.355 02:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:02.355 02:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.355 02:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:02.355 02:32:52 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.355 02:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:02.355 02:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:02.614 00:27:02.614 02:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:02.614 02:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:02.614 02:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:02.872 02:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.872 02:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:02.872 02:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.872 02:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:02.872 02:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.872 02:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:02.872 { 00:27:02.872 "cntlid": 67, 00:27:02.872 "qid": 0, 00:27:02.872 "state": "enabled", 00:27:02.872 "thread": "nvmf_tgt_poll_group_000", 00:27:02.872 "listen_address": { 00:27:02.872 "trtype": "TCP", 00:27:02.872 "adrfam": 
"IPv4", 00:27:02.872 "traddr": "10.0.0.2", 00:27:02.872 "trsvcid": "4420" 00:27:02.872 }, 00:27:02.872 "peer_address": { 00:27:02.872 "trtype": "TCP", 00:27:02.872 "adrfam": "IPv4", 00:27:02.872 "traddr": "10.0.0.1", 00:27:02.872 "trsvcid": "58202" 00:27:02.872 }, 00:27:02.872 "auth": { 00:27:02.872 "state": "completed", 00:27:02.872 "digest": "sha384", 00:27:02.872 "dhgroup": "ffdhe3072" 00:27:02.872 } 00:27:02.872 } 00:27:02.872 ]' 00:27:02.872 02:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:02.872 02:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:02.872 02:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:03.130 02:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:27:03.130 02:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:03.130 02:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:03.130 02:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:03.130 02:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:03.388 02:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:OTEyNTllNzNmYzdhNjRjMGNmY2YwM2ViOGMzMWE1MDZw2Nbs: --dhchap-ctrl-secret DHHC-1:02:YzdmYmRiN2E4M2NiMDZkZDY3ZGI0NTVkOGM0ZjA4YTA0YmFmMDgxZTgyYjZmYjlmAkH4TA==: 00:27:04.761 02:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:04.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:27:04.761 02:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:04.761 02:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.761 02:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:04.761 02:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.761 02:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:04.761 02:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:04.761 02:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:05.019 02:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:27:05.019 02:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:05.019 02:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:05.019 02:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:27:05.019 02:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:27:05.019 02:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:05.019 02:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:05.019 02:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.019 02:32:55 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:05.019 02:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.019 02:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:05.019 02:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:05.277 00:27:05.277 02:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:05.277 02:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:05.277 02:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:05.535 02:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.535 02:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:05.535 02:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.535 02:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:05.535 02:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.535 02:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:05.535 { 00:27:05.535 "cntlid": 69, 00:27:05.535 "qid": 0, 00:27:05.535 "state": "enabled", 00:27:05.535 "thread": 
"nvmf_tgt_poll_group_000", 00:27:05.535 "listen_address": { 00:27:05.535 "trtype": "TCP", 00:27:05.535 "adrfam": "IPv4", 00:27:05.535 "traddr": "10.0.0.2", 00:27:05.535 "trsvcid": "4420" 00:27:05.535 }, 00:27:05.535 "peer_address": { 00:27:05.535 "trtype": "TCP", 00:27:05.535 "adrfam": "IPv4", 00:27:05.535 "traddr": "10.0.0.1", 00:27:05.535 "trsvcid": "58226" 00:27:05.535 }, 00:27:05.535 "auth": { 00:27:05.535 "state": "completed", 00:27:05.535 "digest": "sha384", 00:27:05.535 "dhgroup": "ffdhe3072" 00:27:05.535 } 00:27:05.535 } 00:27:05.535 ]' 00:27:05.535 02:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:05.535 02:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:05.793 02:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:05.793 02:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:27:05.793 02:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:05.793 02:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:05.793 02:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:05.793 02:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:06.051 02:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:MGExOTg5NjMzODNiNTBkYmE5MGMwMWVkZTJiZWRkYWQ4NTgwYzk4YjZiNTEyMjIzDfLwSA==: --dhchap-ctrl-secret DHHC-1:01:MmM3N2QyNWJiMmYzMjAzMTIxYjE5NDAyYjMwOTQwYWRxrbTT: 00:27:07.424 02:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:07.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:07.424 02:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:07.424 02:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.424 02:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:07.424 02:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.424 02:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:07.424 02:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:07.424 02:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:07.424 02:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:27:07.424 02:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:07.424 02:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:07.424 02:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:27:07.424 02:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:27:07.424 02:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:07.424 02:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:27:07.424 02:32:57 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.424 02:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:07.424 02:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.424 02:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:07.424 02:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:07.990 00:27:07.990 02:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:07.990 02:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:07.990 02:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:07.990 02:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.990 02:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:07.990 02:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.990 02:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:08.247 02:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.248 02:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:08.248 { 00:27:08.248 "cntlid": 71, 00:27:08.248 "qid": 0, 00:27:08.248 "state": "enabled", 00:27:08.248 "thread": 
"nvmf_tgt_poll_group_000", 00:27:08.248 "listen_address": { 00:27:08.248 "trtype": "TCP", 00:27:08.248 "adrfam": "IPv4", 00:27:08.248 "traddr": "10.0.0.2", 00:27:08.248 "trsvcid": "4420" 00:27:08.248 }, 00:27:08.248 "peer_address": { 00:27:08.248 "trtype": "TCP", 00:27:08.248 "adrfam": "IPv4", 00:27:08.248 "traddr": "10.0.0.1", 00:27:08.248 "trsvcid": "60798" 00:27:08.248 }, 00:27:08.248 "auth": { 00:27:08.248 "state": "completed", 00:27:08.248 "digest": "sha384", 00:27:08.248 "dhgroup": "ffdhe3072" 00:27:08.248 } 00:27:08.248 } 00:27:08.248 ]' 00:27:08.248 02:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:08.248 02:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:08.248 02:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:08.248 02:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:27:08.248 02:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:08.248 02:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:08.248 02:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:08.248 02:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:08.523 02:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YWRlYzI4ODUxZWE0ZTBlNTA0MmU5MDViYTQyYTViZGE4ODkwNjU0ZmU4YmYyMjAwZjgxYzdlYzJkYWM5OTI3OUbGX4w=: 00:27:09.944 02:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:09.944 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:09.944 02:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:09.944 02:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.944 02:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:09.944 02:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.944 02:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:27:09.944 02:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:09.944 02:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:09.944 02:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:09.944 02:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:27:09.944 02:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:09.944 02:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:09.944 02:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:27:09.944 02:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:27:09.944 02:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:09.944 02:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:27:09.944 02:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.944 02:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:09.944 02:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.944 02:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:09.944 02:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:10.511 00:27:10.511 02:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:10.511 02:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:10.511 02:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:10.769 02:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.769 02:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:10.769 02:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.769 02:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:10.769 02:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.769 02:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:27:10.769 { 00:27:10.769 "cntlid": 73, 00:27:10.769 "qid": 0, 00:27:10.769 "state": "enabled", 00:27:10.769 "thread": "nvmf_tgt_poll_group_000", 00:27:10.769 "listen_address": { 00:27:10.769 "trtype": "TCP", 00:27:10.769 "adrfam": "IPv4", 00:27:10.769 "traddr": "10.0.0.2", 00:27:10.769 "trsvcid": "4420" 00:27:10.769 }, 00:27:10.769 "peer_address": { 00:27:10.769 "trtype": "TCP", 00:27:10.769 "adrfam": "IPv4", 00:27:10.769 "traddr": "10.0.0.1", 00:27:10.769 "trsvcid": "60816" 00:27:10.769 }, 00:27:10.769 "auth": { 00:27:10.769 "state": "completed", 00:27:10.769 "digest": "sha384", 00:27:10.769 "dhgroup": "ffdhe4096" 00:27:10.769 } 00:27:10.769 } 00:27:10.769 ]' 00:27:10.769 02:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:10.769 02:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:10.769 02:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:10.769 02:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:27:10.769 02:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:10.769 02:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:10.769 02:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:10.769 02:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:11.336 02:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:ZGE3NzU4MWJjOTY1YTllNGEyYzkxOTI2NjQ2MWZhNTZlYjQwNjU4NmFiYzg4NDdkFoZ5qw==: --dhchap-ctrl-secret 
DHHC-1:03:MWNiODM2ZDRmMjk4NTc5NzBlNWQ2NWM0MTRmYzUxMDQwNzhlOTk1MTA2M2E3YjQzMGEzM2U2NTRjZDRmYzEzMOafaCo=: 00:27:12.271 02:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:12.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:12.271 02:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:12.271 02:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.271 02:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:12.271 02:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.271 02:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:12.271 02:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:12.271 02:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:12.837 02:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:27:12.837 02:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:12.837 02:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:12.837 02:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:27:12.837 02:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:27:12.837 02:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:12.837 02:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:12.837 02:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.837 02:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:12.837 02:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.837 02:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:12.837 02:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:13.095 00:27:13.095 02:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:13.095 02:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:13.095 02:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:13.352 02:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.352 02:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:13.352 02:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.352 02:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:13.352 02:33:03 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.352 02:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:13.352 { 00:27:13.352 "cntlid": 75, 00:27:13.352 "qid": 0, 00:27:13.352 "state": "enabled", 00:27:13.352 "thread": "nvmf_tgt_poll_group_000", 00:27:13.352 "listen_address": { 00:27:13.352 "trtype": "TCP", 00:27:13.352 "adrfam": "IPv4", 00:27:13.352 "traddr": "10.0.0.2", 00:27:13.352 "trsvcid": "4420" 00:27:13.352 }, 00:27:13.352 "peer_address": { 00:27:13.352 "trtype": "TCP", 00:27:13.352 "adrfam": "IPv4", 00:27:13.352 "traddr": "10.0.0.1", 00:27:13.352 "trsvcid": "60838" 00:27:13.352 }, 00:27:13.352 "auth": { 00:27:13.352 "state": "completed", 00:27:13.352 "digest": "sha384", 00:27:13.352 "dhgroup": "ffdhe4096" 00:27:13.352 } 00:27:13.352 } 00:27:13.352 ]' 00:27:13.352 02:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:13.352 02:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:13.352 02:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:13.608 02:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:27:13.608 02:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:13.608 02:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:13.608 02:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:13.608 02:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:13.865 02:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid 
a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:OTEyNTllNzNmYzdhNjRjMGNmY2YwM2ViOGMzMWE1MDZw2Nbs: --dhchap-ctrl-secret DHHC-1:02:YzdmYmRiN2E4M2NiMDZkZDY3ZGI0NTVkOGM0ZjA4YTA0YmFmMDgxZTgyYjZmYjlmAkH4TA==: 00:27:15.235 02:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:15.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:15.235 02:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:15.235 02:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.235 02:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:15.235 02:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.235 02:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:15.235 02:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:15.235 02:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:15.235 02:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:27:15.235 02:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:15.235 02:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:15.235 02:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:27:15.235 02:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:27:15.235 02:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:27:15.235 02:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:15.235 02:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.235 02:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:15.235 02:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.235 02:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:15.236 02:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:15.800 00:27:15.800 02:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:15.800 02:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:15.800 02:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:16.057 02:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.057 02:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:16.057 02:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.057 02:33:06 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:16.057 02:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.057 02:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:16.057 { 00:27:16.057 "cntlid": 77, 00:27:16.057 "qid": 0, 00:27:16.057 "state": "enabled", 00:27:16.057 "thread": "nvmf_tgt_poll_group_000", 00:27:16.057 "listen_address": { 00:27:16.057 "trtype": "TCP", 00:27:16.057 "adrfam": "IPv4", 00:27:16.057 "traddr": "10.0.0.2", 00:27:16.057 "trsvcid": "4420" 00:27:16.057 }, 00:27:16.057 "peer_address": { 00:27:16.057 "trtype": "TCP", 00:27:16.057 "adrfam": "IPv4", 00:27:16.057 "traddr": "10.0.0.1", 00:27:16.057 "trsvcid": "60872" 00:27:16.057 }, 00:27:16.057 "auth": { 00:27:16.057 "state": "completed", 00:27:16.057 "digest": "sha384", 00:27:16.057 "dhgroup": "ffdhe4096" 00:27:16.057 } 00:27:16.057 } 00:27:16.057 ]' 00:27:16.057 02:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:16.057 02:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:16.057 02:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:16.057 02:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:27:16.057 02:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:16.315 02:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:16.315 02:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:16.315 02:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:16.572 02:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:MGExOTg5NjMzODNiNTBkYmE5MGMwMWVkZTJiZWRkYWQ4NTgwYzk4YjZiNTEyMjIzDfLwSA==: --dhchap-ctrl-secret DHHC-1:01:MmM3N2QyNWJiMmYzMjAzMTIxYjE5NDAyYjMwOTQwYWRxrbTT: 00:27:17.946 02:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:17.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:17.946 02:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:17.946 02:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.946 02:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:17.946 02:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.946 02:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:17.946 02:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:17.946 02:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:17.946 02:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:27:17.946 02:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:17.946 02:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:17.946 02:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:27:17.946 02:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:27:17.946 02:33:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:17.946 02:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:27:17.946 02:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.946 02:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:17.946 02:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.946 02:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:17.946 02:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:18.511 00:27:18.511 02:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:18.511 02:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:18.511 02:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:18.768 02:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.768 02:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:18.768 02:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.768 02:33:09 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:18.768 02:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.769 02:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:18.769 { 00:27:18.769 "cntlid": 79, 00:27:18.769 "qid": 0, 00:27:18.769 "state": "enabled", 00:27:18.769 "thread": "nvmf_tgt_poll_group_000", 00:27:18.769 "listen_address": { 00:27:18.769 "trtype": "TCP", 00:27:18.769 "adrfam": "IPv4", 00:27:18.769 "traddr": "10.0.0.2", 00:27:18.769 "trsvcid": "4420" 00:27:18.769 }, 00:27:18.769 "peer_address": { 00:27:18.769 "trtype": "TCP", 00:27:18.769 "adrfam": "IPv4", 00:27:18.769 "traddr": "10.0.0.1", 00:27:18.769 "trsvcid": "38120" 00:27:18.769 }, 00:27:18.769 "auth": { 00:27:18.769 "state": "completed", 00:27:18.769 "digest": "sha384", 00:27:18.769 "dhgroup": "ffdhe4096" 00:27:18.769 } 00:27:18.769 } 00:27:18.769 ]' 00:27:18.769 02:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:18.769 02:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:18.769 02:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:18.769 02:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:27:18.769 02:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:19.026 02:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:19.026 02:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:19.026 02:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:19.284 02:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YWRlYzI4ODUxZWE0ZTBlNTA0MmU5MDViYTQyYTViZGE4ODkwNjU0ZmU4YmYyMjAwZjgxYzdlYzJkYWM5OTI3OUbGX4w=: 00:27:20.656 02:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:20.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:20.656 02:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:20.656 02:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.656 02:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:20.656 02:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.656 02:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:27:20.656 02:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:20.656 02:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:20.656 02:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:20.656 02:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:27:20.656 02:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:20.656 02:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:20.656 02:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:27:20.656 02:33:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:27:20.656 02:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:20.656 02:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:20.656 02:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.656 02:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:20.656 02:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.656 02:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:20.656 02:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:21.222 00:27:21.222 02:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:21.222 02:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:21.222 02:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:21.788 02:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.788 02:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:27:21.788 02:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.788 02:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:21.788 02:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.788 02:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:21.788 { 00:27:21.788 "cntlid": 81, 00:27:21.788 "qid": 0, 00:27:21.788 "state": "enabled", 00:27:21.788 "thread": "nvmf_tgt_poll_group_000", 00:27:21.788 "listen_address": { 00:27:21.788 "trtype": "TCP", 00:27:21.788 "adrfam": "IPv4", 00:27:21.788 "traddr": "10.0.0.2", 00:27:21.788 "trsvcid": "4420" 00:27:21.788 }, 00:27:21.788 "peer_address": { 00:27:21.788 "trtype": "TCP", 00:27:21.788 "adrfam": "IPv4", 00:27:21.788 "traddr": "10.0.0.1", 00:27:21.788 "trsvcid": "38136" 00:27:21.788 }, 00:27:21.788 "auth": { 00:27:21.788 "state": "completed", 00:27:21.788 "digest": "sha384", 00:27:21.788 "dhgroup": "ffdhe6144" 00:27:21.788 } 00:27:21.788 } 00:27:21.788 ]' 00:27:21.788 02:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:21.788 02:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:21.788 02:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:21.788 02:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:27:21.788 02:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:21.788 02:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:21.788 02:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:21.788 02:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:27:22.046 02:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:ZGE3NzU4MWJjOTY1YTllNGEyYzkxOTI2NjQ2MWZhNTZlYjQwNjU4NmFiYzg4NDdkFoZ5qw==: --dhchap-ctrl-secret DHHC-1:03:MWNiODM2ZDRmMjk4NTc5NzBlNWQ2NWM0MTRmYzUxMDQwNzhlOTk1MTA2M2E3YjQzMGEzM2U2NTRjZDRmYzEzMOafaCo=: 00:27:23.419 02:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:23.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:23.419 02:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:23.419 02:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.419 02:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:23.419 02:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.419 02:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:23.419 02:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:23.419 02:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:23.676 02:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:27:23.676 02:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:23.676 02:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:27:23.676 02:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:27:23.676 02:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:27:23.676 02:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:23.676 02:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:23.676 02:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.676 02:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:23.676 02:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.676 02:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:23.676 02:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:24.239 00:27:24.240 02:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:24.240 02:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:24.240 02:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:24.497 02:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:27:24.497 02:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:24.497 02:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.497 02:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:24.497 02:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.497 02:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:24.497 { 00:27:24.497 "cntlid": 83, 00:27:24.497 "qid": 0, 00:27:24.497 "state": "enabled", 00:27:24.497 "thread": "nvmf_tgt_poll_group_000", 00:27:24.497 "listen_address": { 00:27:24.497 "trtype": "TCP", 00:27:24.497 "adrfam": "IPv4", 00:27:24.498 "traddr": "10.0.0.2", 00:27:24.498 "trsvcid": "4420" 00:27:24.498 }, 00:27:24.498 "peer_address": { 00:27:24.498 "trtype": "TCP", 00:27:24.498 "adrfam": "IPv4", 00:27:24.498 "traddr": "10.0.0.1", 00:27:24.498 "trsvcid": "38164" 00:27:24.498 }, 00:27:24.498 "auth": { 00:27:24.498 "state": "completed", 00:27:24.498 "digest": "sha384", 00:27:24.498 "dhgroup": "ffdhe6144" 00:27:24.498 } 00:27:24.498 } 00:27:24.498 ]' 00:27:24.498 02:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:24.498 02:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:24.498 02:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:24.498 02:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:27:24.498 02:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:24.498 02:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:24.498 02:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:24.498 02:33:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:24.756 02:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:OTEyNTllNzNmYzdhNjRjMGNmY2YwM2ViOGMzMWE1MDZw2Nbs: --dhchap-ctrl-secret DHHC-1:02:YzdmYmRiN2E4M2NiMDZkZDY3ZGI0NTVkOGM0ZjA4YTA0YmFmMDgxZTgyYjZmYjlmAkH4TA==: 00:27:26.128 02:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:26.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:26.128 02:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:26.128 02:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.128 02:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:26.128 02:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.128 02:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:26.128 02:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:26.128 02:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:26.385 02:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:27:26.385 02:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:27:26.385 02:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:26.385 02:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:27:26.385 02:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:27:26.385 02:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:26.385 02:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:26.385 02:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.385 02:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:26.385 02:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.385 02:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:26.386 02:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:26.951 00:27:26.951 02:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:26.951 02:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:26.951 02:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
00:27:27.209 02:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.209 02:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:27.209 02:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.209 02:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:27.209 02:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.209 02:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:27.209 { 00:27:27.209 "cntlid": 85, 00:27:27.209 "qid": 0, 00:27:27.209 "state": "enabled", 00:27:27.209 "thread": "nvmf_tgt_poll_group_000", 00:27:27.209 "listen_address": { 00:27:27.209 "trtype": "TCP", 00:27:27.209 "adrfam": "IPv4", 00:27:27.209 "traddr": "10.0.0.2", 00:27:27.209 "trsvcid": "4420" 00:27:27.209 }, 00:27:27.209 "peer_address": { 00:27:27.209 "trtype": "TCP", 00:27:27.209 "adrfam": "IPv4", 00:27:27.209 "traddr": "10.0.0.1", 00:27:27.209 "trsvcid": "38200" 00:27:27.209 }, 00:27:27.209 "auth": { 00:27:27.209 "state": "completed", 00:27:27.209 "digest": "sha384", 00:27:27.209 "dhgroup": "ffdhe6144" 00:27:27.209 } 00:27:27.209 } 00:27:27.209 ]' 00:27:27.209 02:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:27.209 02:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:27.209 02:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:27.467 02:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:27:27.467 02:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:27.467 02:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:27.467 02:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:27:27.467 02:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:27.725 02:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:MGExOTg5NjMzODNiNTBkYmE5MGMwMWVkZTJiZWRkYWQ4NTgwYzk4YjZiNTEyMjIzDfLwSA==: --dhchap-ctrl-secret DHHC-1:01:MmM3N2QyNWJiMmYzMjAzMTIxYjE5NDAyYjMwOTQwYWRxrbTT: 00:27:29.134 02:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:29.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:29.134 02:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:29.134 02:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.134 02:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:29.134 02:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.134 02:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:29.134 02:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:29.134 02:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:29.134 02:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:27:29.134 02:33:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:29.134 02:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:29.134 02:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:27:29.134 02:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:27:29.134 02:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:29.134 02:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:27:29.134 02:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.134 02:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:29.134 02:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.134 02:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:29.135 02:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:29.699 00:27:29.958 02:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:29.958 02:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:29.958 02:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:27:30.216 02:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.216 02:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:30.216 02:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.216 02:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:30.216 02:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.216 02:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:30.216 { 00:27:30.216 "cntlid": 87, 00:27:30.216 "qid": 0, 00:27:30.216 "state": "enabled", 00:27:30.216 "thread": "nvmf_tgt_poll_group_000", 00:27:30.216 "listen_address": { 00:27:30.216 "trtype": "TCP", 00:27:30.216 "adrfam": "IPv4", 00:27:30.216 "traddr": "10.0.0.2", 00:27:30.216 "trsvcid": "4420" 00:27:30.216 }, 00:27:30.216 "peer_address": { 00:27:30.216 "trtype": "TCP", 00:27:30.216 "adrfam": "IPv4", 00:27:30.216 "traddr": "10.0.0.1", 00:27:30.216 "trsvcid": "49170" 00:27:30.216 }, 00:27:30.216 "auth": { 00:27:30.216 "state": "completed", 00:27:30.216 "digest": "sha384", 00:27:30.216 "dhgroup": "ffdhe6144" 00:27:30.217 } 00:27:30.217 } 00:27:30.217 ]' 00:27:30.217 02:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:30.217 02:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:30.217 02:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:30.217 02:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:27:30.217 02:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:30.217 02:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:30.217 02:33:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:30.217 02:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:30.475 02:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YWRlYzI4ODUxZWE0ZTBlNTA0MmU5MDViYTQyYTViZGE4ODkwNjU0ZmU4YmYyMjAwZjgxYzdlYzJkYWM5OTI3OUbGX4w=: 00:27:31.850 02:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:31.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:31.850 02:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:31.850 02:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.850 02:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:31.850 02:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.850 02:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.850 02:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:31.850 02:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:31.850 02:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:32.108 02:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- 
# connect_authenticate sha384 ffdhe8192 0 00:27:32.108 02:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:32.108 02:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:32.108 02:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:27:32.108 02:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:27:32.108 02:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:32.108 02:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:32.108 02:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.108 02:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:32.108 02:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.108 02:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:32.108 02:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:33.042 00:27:33.042 02:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:33.042 02:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:33.042 02:33:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:33.305 02:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.306 02:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:33.306 02:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.306 02:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:33.306 02:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.306 02:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:33.306 { 00:27:33.306 "cntlid": 89, 00:27:33.306 "qid": 0, 00:27:33.306 "state": "enabled", 00:27:33.306 "thread": "nvmf_tgt_poll_group_000", 00:27:33.306 "listen_address": { 00:27:33.306 "trtype": "TCP", 00:27:33.306 "adrfam": "IPv4", 00:27:33.306 "traddr": "10.0.0.2", 00:27:33.306 "trsvcid": "4420" 00:27:33.306 }, 00:27:33.306 "peer_address": { 00:27:33.306 "trtype": "TCP", 00:27:33.306 "adrfam": "IPv4", 00:27:33.306 "traddr": "10.0.0.1", 00:27:33.306 "trsvcid": "49194" 00:27:33.306 }, 00:27:33.306 "auth": { 00:27:33.306 "state": "completed", 00:27:33.306 "digest": "sha384", 00:27:33.306 "dhgroup": "ffdhe8192" 00:27:33.306 } 00:27:33.306 } 00:27:33.306 ]' 00:27:33.306 02:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:33.306 02:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:33.306 02:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:33.563 02:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:27:33.563 02:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:33.563 02:33:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:33.563 02:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:33.563 02:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:33.821 02:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:ZGE3NzU4MWJjOTY1YTllNGEyYzkxOTI2NjQ2MWZhNTZlYjQwNjU4NmFiYzg4NDdkFoZ5qw==: --dhchap-ctrl-secret DHHC-1:03:MWNiODM2ZDRmMjk4NTc5NzBlNWQ2NWM0MTRmYzUxMDQwNzhlOTk1MTA2M2E3YjQzMGEzM2U2NTRjZDRmYzEzMOafaCo=: 00:27:35.196 02:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:35.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:35.196 02:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:35.196 02:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.196 02:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:35.196 02:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.196 02:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:35.196 02:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:35.196 02:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:35.455 02:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:27:35.455 02:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:35.455 02:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:35.455 02:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:27:35.455 02:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:27:35.455 02:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:35.455 02:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:35.455 02:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.455 02:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:35.455 02:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.455 02:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:35.455 02:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:36.389 00:27:36.389 02:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:27:36.389 02:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:36.389 02:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:36.647 02:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.647 02:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:36.647 02:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.647 02:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:36.647 02:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.647 02:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:36.647 { 00:27:36.647 "cntlid": 91, 00:27:36.647 "qid": 0, 00:27:36.647 "state": "enabled", 00:27:36.647 "thread": "nvmf_tgt_poll_group_000", 00:27:36.647 "listen_address": { 00:27:36.647 "trtype": "TCP", 00:27:36.647 "adrfam": "IPv4", 00:27:36.647 "traddr": "10.0.0.2", 00:27:36.647 "trsvcid": "4420" 00:27:36.647 }, 00:27:36.647 "peer_address": { 00:27:36.647 "trtype": "TCP", 00:27:36.647 "adrfam": "IPv4", 00:27:36.647 "traddr": "10.0.0.1", 00:27:36.647 "trsvcid": "49224" 00:27:36.647 }, 00:27:36.647 "auth": { 00:27:36.647 "state": "completed", 00:27:36.647 "digest": "sha384", 00:27:36.647 "dhgroup": "ffdhe8192" 00:27:36.647 } 00:27:36.647 } 00:27:36.647 ]' 00:27:36.647 02:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:36.647 02:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:36.647 02:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:36.647 02:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 
]] 00:27:36.647 02:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:36.906 02:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:36.906 02:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:36.906 02:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:37.164 02:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:OTEyNTllNzNmYzdhNjRjMGNmY2YwM2ViOGMzMWE1MDZw2Nbs: --dhchap-ctrl-secret DHHC-1:02:YzdmYmRiN2E4M2NiMDZkZDY3ZGI0NTVkOGM0ZjA4YTA0YmFmMDgxZTgyYjZmYjlmAkH4TA==: 00:27:38.539 02:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:38.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:38.540 02:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:38.540 02:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.540 02:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:38.540 02:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.540 02:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:38.540 02:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:38.540 02:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:38.540 02:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:27:38.540 02:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:38.540 02:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:38.540 02:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:27:38.540 02:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:27:38.540 02:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:38.540 02:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:38.540 02:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.540 02:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:38.540 02:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.540 02:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:38.540 02:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:39.914 
00:27:39.914 02:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:39.914 02:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:39.914 02:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:39.914 02:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.914 02:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:39.914 02:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.914 02:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:39.914 02:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.914 02:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:39.914 { 00:27:39.914 "cntlid": 93, 00:27:39.914 "qid": 0, 00:27:39.914 "state": "enabled", 00:27:39.914 "thread": "nvmf_tgt_poll_group_000", 00:27:39.914 "listen_address": { 00:27:39.914 "trtype": "TCP", 00:27:39.914 "adrfam": "IPv4", 00:27:39.914 "traddr": "10.0.0.2", 00:27:39.914 "trsvcid": "4420" 00:27:39.914 }, 00:27:39.914 "peer_address": { 00:27:39.914 "trtype": "TCP", 00:27:39.914 "adrfam": "IPv4", 00:27:39.914 "traddr": "10.0.0.1", 00:27:39.914 "trsvcid": "56250" 00:27:39.914 }, 00:27:39.914 "auth": { 00:27:39.914 "state": "completed", 00:27:39.914 "digest": "sha384", 00:27:39.914 "dhgroup": "ffdhe8192" 00:27:39.914 } 00:27:39.914 } 00:27:39.914 ]' 00:27:39.914 02:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:39.914 02:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:39.914 02:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:39.914 02:33:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:27:39.914 02:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:40.172 02:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:40.172 02:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:40.172 02:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:40.431 02:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:MGExOTg5NjMzODNiNTBkYmE5MGMwMWVkZTJiZWRkYWQ4NTgwYzk4YjZiNTEyMjIzDfLwSA==: --dhchap-ctrl-secret DHHC-1:01:MmM3N2QyNWJiMmYzMjAzMTIxYjE5NDAyYjMwOTQwYWRxrbTT: 00:27:41.809 02:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:41.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:41.809 02:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:41.809 02:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.809 02:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:41.809 02:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.809 02:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:41.809 02:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 
00:27:41.809 02:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:41.809 02:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:27:41.809 02:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:41.809 02:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:27:41.809 02:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:27:41.809 02:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:27:41.809 02:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:41.809 02:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:27:41.809 02:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.809 02:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:41.809 02:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.809 02:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:41.809 02:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:43.183 
00:27:43.183 02:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:43.183 02:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:43.183 02:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:43.183 02:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.183 02:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:43.183 02:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.183 02:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:43.183 02:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.183 02:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:43.183 { 00:27:43.183 "cntlid": 95, 00:27:43.183 "qid": 0, 00:27:43.183 "state": "enabled", 00:27:43.183 "thread": "nvmf_tgt_poll_group_000", 00:27:43.183 "listen_address": { 00:27:43.183 "trtype": "TCP", 00:27:43.183 "adrfam": "IPv4", 00:27:43.183 "traddr": "10.0.0.2", 00:27:43.183 "trsvcid": "4420" 00:27:43.183 }, 00:27:43.183 "peer_address": { 00:27:43.183 "trtype": "TCP", 00:27:43.183 "adrfam": "IPv4", 00:27:43.183 "traddr": "10.0.0.1", 00:27:43.183 "trsvcid": "56288" 00:27:43.183 }, 00:27:43.183 "auth": { 00:27:43.183 "state": "completed", 00:27:43.183 "digest": "sha384", 00:27:43.183 "dhgroup": "ffdhe8192" 00:27:43.183 } 00:27:43.183 } 00:27:43.183 ]' 00:27:43.183 02:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:43.183 02:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:27:43.183 02:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:43.183 02:33:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:27:43.183 02:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:43.439 02:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:43.439 02:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:43.439 02:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:43.695 02:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YWRlYzI4ODUxZWE0ZTBlNTA0MmU5MDViYTQyYTViZGE4ODkwNjU0ZmU4YmYyMjAwZjgxYzdlYzJkYWM5OTI3OUbGX4w=: 00:27:45.068 02:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:45.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:45.068 02:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:45.068 02:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.068 02:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:45.068 02:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.068 02:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:27:45.068 02:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:27:45.068 02:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
00:27:45.068 02:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:27:45.068 02:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:27:45.068 02:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:27:45.068 02:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:45.068 02:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:45.068 02:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:27:45.068 02:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:27:45.068 02:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:45.068 02:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:45.068 02:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.068 02:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:45.068 02:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.068 02:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:45.068 02:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:45.634 00:27:45.634 02:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:45.634 02:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:45.634 02:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:45.892 02:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.892 02:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:45.892 02:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.892 02:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:45.892 02:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.892 02:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:45.892 { 00:27:45.892 "cntlid": 97, 00:27:45.892 "qid": 0, 00:27:45.892 "state": "enabled", 00:27:45.892 "thread": "nvmf_tgt_poll_group_000", 00:27:45.892 "listen_address": { 00:27:45.892 "trtype": "TCP", 00:27:45.892 "adrfam": "IPv4", 00:27:45.892 "traddr": "10.0.0.2", 00:27:45.892 "trsvcid": "4420" 00:27:45.892 }, 00:27:45.892 "peer_address": { 00:27:45.892 "trtype": "TCP", 00:27:45.892 "adrfam": "IPv4", 00:27:45.892 "traddr": "10.0.0.1", 00:27:45.892 "trsvcid": "56308" 00:27:45.892 }, 00:27:45.892 "auth": { 00:27:45.892 "state": "completed", 00:27:45.892 "digest": "sha512", 00:27:45.892 "dhgroup": "null" 00:27:45.892 } 00:27:45.892 } 00:27:45.892 ]' 00:27:45.892 02:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
00:27:45.892 02:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:45.892 02:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:45.892 02:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:27:45.892 02:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:45.892 02:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:45.892 02:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:45.892 02:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:46.151 02:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:ZGE3NzU4MWJjOTY1YTllNGEyYzkxOTI2NjQ2MWZhNTZlYjQwNjU4NmFiYzg4NDdkFoZ5qw==: --dhchap-ctrl-secret DHHC-1:03:MWNiODM2ZDRmMjk4NTc5NzBlNWQ2NWM0MTRmYzUxMDQwNzhlOTk1MTA2M2E3YjQzMGEzM2U2NTRjZDRmYzEzMOafaCo=: 00:27:47.526 02:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:47.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:47.526 02:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:47.526 02:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.526 02:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:47.526 02:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:27:47.526 02:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:47.526 02:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:27:47.526 02:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:27:47.784 02:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:27:47.784 02:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:47.784 02:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:47.784 02:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:27:47.784 02:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:27:47.784 02:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:47.784 02:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:47.784 02:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.784 02:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:47.784 02:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.784 02:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:47.784 02:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:48.042 00:27:48.042 02:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:48.042 02:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:48.042 02:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:48.299 02:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.299 02:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:48.299 02:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.299 02:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:48.299 02:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.299 02:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:48.299 { 00:27:48.299 "cntlid": 99, 00:27:48.299 "qid": 0, 00:27:48.299 "state": "enabled", 00:27:48.299 "thread": "nvmf_tgt_poll_group_000", 00:27:48.299 "listen_address": { 00:27:48.299 "trtype": "TCP", 00:27:48.299 "adrfam": "IPv4", 00:27:48.299 "traddr": "10.0.0.2", 00:27:48.299 "trsvcid": "4420" 00:27:48.299 }, 00:27:48.299 "peer_address": { 00:27:48.299 "trtype": "TCP", 00:27:48.299 "adrfam": "IPv4", 00:27:48.299 "traddr": "10.0.0.1", 00:27:48.299 "trsvcid": "43194" 00:27:48.299 }, 00:27:48.299 "auth": { 00:27:48.299 "state": "completed", 00:27:48.299 "digest": "sha512", 00:27:48.299 "dhgroup": "null" 00:27:48.299 } 00:27:48.299 } 00:27:48.299 ]' 00:27:48.299 
02:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:48.557 02:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:48.557 02:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:48.557 02:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:27:48.557 02:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:48.557 02:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:48.557 02:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:48.557 02:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:48.814 02:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:OTEyNTllNzNmYzdhNjRjMGNmY2YwM2ViOGMzMWE1MDZw2Nbs: --dhchap-ctrl-secret DHHC-1:02:YzdmYmRiN2E4M2NiMDZkZDY3ZGI0NTVkOGM0ZjA4YTA0YmFmMDgxZTgyYjZmYjlmAkH4TA==: 00:27:50.235 02:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:50.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:50.235 02:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:50.236 02:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.236 02:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:50.236 02:33:40 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.236 02:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:50.236 02:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:27:50.236 02:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:27:50.236 02:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:27:50.236 02:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:50.236 02:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:50.236 02:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:27:50.236 02:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:27:50.236 02:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:50.236 02:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:50.236 02:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.236 02:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:50.236 02:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.236 02:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:50.236 02:33:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:50.802 00:27:50.802 02:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:50.802 02:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:50.802 02:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:51.060 02:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.060 02:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:51.060 02:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.060 02:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:51.060 02:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.060 02:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:51.060 { 00:27:51.060 "cntlid": 101, 00:27:51.060 "qid": 0, 00:27:51.060 "state": "enabled", 00:27:51.060 "thread": "nvmf_tgt_poll_group_000", 00:27:51.060 "listen_address": { 00:27:51.060 "trtype": "TCP", 00:27:51.060 "adrfam": "IPv4", 00:27:51.060 "traddr": "10.0.0.2", 00:27:51.060 "trsvcid": "4420" 00:27:51.060 }, 00:27:51.060 "peer_address": { 00:27:51.060 "trtype": "TCP", 00:27:51.060 "adrfam": "IPv4", 00:27:51.060 "traddr": "10.0.0.1", 00:27:51.060 "trsvcid": "43224" 00:27:51.060 }, 00:27:51.060 "auth": { 00:27:51.060 "state": "completed", 00:27:51.060 "digest": "sha512", 00:27:51.060 "dhgroup": "null" 
00:27:51.060 } 00:27:51.060 } 00:27:51.060 ]' 00:27:51.060 02:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:51.060 02:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:51.060 02:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:51.060 02:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:27:51.060 02:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:51.060 02:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:51.060 02:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:51.060 02:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:51.626 02:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:MGExOTg5NjMzODNiNTBkYmE5MGMwMWVkZTJiZWRkYWQ4NTgwYzk4YjZiNTEyMjIzDfLwSA==: --dhchap-ctrl-secret DHHC-1:01:MmM3N2QyNWJiMmYzMjAzMTIxYjE5NDAyYjMwOTQwYWRxrbTT: 00:27:52.559 02:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:52.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:52.559 02:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:52.559 02:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.559 02:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:27:52.559 02:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.559 02:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:52.559 02:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:27:52.559 02:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:27:53.124 02:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:27:53.124 02:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:53.124 02:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:53.124 02:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:27:53.124 02:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:27:53.124 02:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:53.124 02:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:27:53.124 02:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.124 02:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:53.124 02:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.124 02:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:53.124 02:33:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:53.380 00:27:53.380 02:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:53.380 02:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:53.380 02:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:53.638 02:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.638 02:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:53.638 02:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.638 02:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:53.638 02:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.638 02:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:53.638 { 00:27:53.638 "cntlid": 103, 00:27:53.638 "qid": 0, 00:27:53.638 "state": "enabled", 00:27:53.638 "thread": "nvmf_tgt_poll_group_000", 00:27:53.638 "listen_address": { 00:27:53.638 "trtype": "TCP", 00:27:53.638 "adrfam": "IPv4", 00:27:53.638 "traddr": "10.0.0.2", 00:27:53.638 "trsvcid": "4420" 00:27:53.638 }, 00:27:53.638 "peer_address": { 00:27:53.638 "trtype": "TCP", 00:27:53.638 "adrfam": "IPv4", 00:27:53.638 "traddr": "10.0.0.1", 00:27:53.638 "trsvcid": "43246" 00:27:53.638 }, 00:27:53.638 "auth": { 00:27:53.638 "state": "completed", 00:27:53.638 "digest": "sha512", 00:27:53.638 "dhgroup": "null" 00:27:53.638 } 00:27:53.638 } 
00:27:53.638 ]' 00:27:53.638 02:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:53.638 02:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:53.638 02:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:53.638 02:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:27:53.638 02:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:53.638 02:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:53.638 02:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:53.638 02:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:54.202 02:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YWRlYzI4ODUxZWE0ZTBlNTA0MmU5MDViYTQyYTViZGE4ODkwNjU0ZmU4YmYyMjAwZjgxYzdlYzJkYWM5OTI3OUbGX4w=: 00:27:55.132 02:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:55.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:55.132 02:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:55.132 02:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.132 02:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:55.132 02:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:27:55.132 02:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:27:55.132 02:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:55.132 02:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:55.132 02:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:55.389 02:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:27:55.389 02:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:55.389 02:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:55.389 02:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:27:55.389 02:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:27:55.389 02:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:55.389 02:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:55.389 02:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.389 02:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:55.389 02:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.389 02:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:55.389 02:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:55.953 00:27:55.953 02:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:55.953 02:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:55.953 02:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:56.211 02:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.211 02:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:56.211 02:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.211 02:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:56.211 02:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.211 02:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:56.211 { 00:27:56.211 "cntlid": 105, 00:27:56.211 "qid": 0, 00:27:56.211 "state": "enabled", 00:27:56.211 "thread": "nvmf_tgt_poll_group_000", 00:27:56.211 "listen_address": { 00:27:56.211 "trtype": "TCP", 00:27:56.211 "adrfam": "IPv4", 00:27:56.211 "traddr": "10.0.0.2", 00:27:56.211 "trsvcid": "4420" 00:27:56.211 }, 00:27:56.211 "peer_address": { 00:27:56.211 "trtype": "TCP", 00:27:56.211 "adrfam": "IPv4", 00:27:56.211 "traddr": "10.0.0.1", 00:27:56.211 "trsvcid": "43268" 00:27:56.211 }, 00:27:56.211 "auth": { 00:27:56.211 
"state": "completed", 00:27:56.211 "digest": "sha512", 00:27:56.211 "dhgroup": "ffdhe2048" 00:27:56.211 } 00:27:56.211 } 00:27:56.211 ]' 00:27:56.211 02:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:56.211 02:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:56.211 02:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:56.211 02:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:27:56.211 02:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:56.211 02:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:56.211 02:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:56.211 02:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:56.469 02:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:ZGE3NzU4MWJjOTY1YTllNGEyYzkxOTI2NjQ2MWZhNTZlYjQwNjU4NmFiYzg4NDdkFoZ5qw==: --dhchap-ctrl-secret DHHC-1:03:MWNiODM2ZDRmMjk4NTc5NzBlNWQ2NWM0MTRmYzUxMDQwNzhlOTk1MTA2M2E3YjQzMGEzM2U2NTRjZDRmYzEzMOafaCo=: 00:27:57.841 02:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:57.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:57.841 02:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:57.841 02:33:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.841 02:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:57.841 02:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.841 02:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:57.841 02:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:57.841 02:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:57.841 02:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:27:57.841 02:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:57.841 02:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:27:57.841 02:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:27:57.841 02:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:27:57.841 02:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:57.841 02:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:57.841 02:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.841 02:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:57.841 02:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.841 02:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:57.841 02:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:58.407 00:27:58.407 02:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:58.407 02:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:58.407 02:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:58.665 02:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.665 02:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:58.665 02:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.665 02:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:58.665 02:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.665 02:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:58.665 { 00:27:58.665 "cntlid": 107, 00:27:58.665 "qid": 0, 00:27:58.665 "state": "enabled", 00:27:58.665 "thread": "nvmf_tgt_poll_group_000", 00:27:58.665 "listen_address": { 00:27:58.665 "trtype": "TCP", 00:27:58.665 "adrfam": "IPv4", 00:27:58.665 "traddr": "10.0.0.2", 00:27:58.665 "trsvcid": "4420" 00:27:58.665 }, 00:27:58.665 "peer_address": { 00:27:58.665 "trtype": "TCP", 
00:27:58.665 "adrfam": "IPv4", 00:27:58.665 "traddr": "10.0.0.1", 00:27:58.665 "trsvcid": "52946" 00:27:58.665 }, 00:27:58.665 "auth": { 00:27:58.665 "state": "completed", 00:27:58.665 "digest": "sha512", 00:27:58.665 "dhgroup": "ffdhe2048" 00:27:58.665 } 00:27:58.665 } 00:27:58.665 ]' 00:27:58.665 02:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:58.666 02:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:27:58.666 02:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:58.666 02:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:27:58.666 02:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:58.666 02:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:58.666 02:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:58.666 02:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:58.923 02:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:OTEyNTllNzNmYzdhNjRjMGNmY2YwM2ViOGMzMWE1MDZw2Nbs: --dhchap-ctrl-secret DHHC-1:02:YzdmYmRiN2E4M2NiMDZkZDY3ZGI0NTVkOGM0ZjA4YTA0YmFmMDgxZTgyYjZmYjlmAkH4TA==: 00:28:00.296 02:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:00.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:00.296 02:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:00.296 02:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.296 02:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:00.296 02:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.296 02:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:00.296 02:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:00.296 02:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:00.554 02:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:28:00.554 02:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:00.554 02:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:00.554 02:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:28:00.554 02:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:00.554 02:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:00.554 02:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:00.554 02:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.554 02:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:00.554 02:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:28:00.554 02:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:00.554 02:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:00.812 00:28:00.812 02:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:00.812 02:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:00.812 02:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:01.070 02:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.070 02:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:01.070 02:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.070 02:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:01.070 02:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.070 02:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:01.070 { 00:28:01.070 "cntlid": 109, 00:28:01.070 "qid": 0, 00:28:01.070 "state": "enabled", 00:28:01.070 "thread": "nvmf_tgt_poll_group_000", 00:28:01.070 "listen_address": { 00:28:01.070 "trtype": "TCP", 00:28:01.070 "adrfam": "IPv4", 00:28:01.070 "traddr": "10.0.0.2", 00:28:01.070 "trsvcid": "4420" 
00:28:01.070 }, 00:28:01.070 "peer_address": { 00:28:01.070 "trtype": "TCP", 00:28:01.070 "adrfam": "IPv4", 00:28:01.070 "traddr": "10.0.0.1", 00:28:01.070 "trsvcid": "52982" 00:28:01.070 }, 00:28:01.070 "auth": { 00:28:01.070 "state": "completed", 00:28:01.070 "digest": "sha512", 00:28:01.070 "dhgroup": "ffdhe2048" 00:28:01.070 } 00:28:01.070 } 00:28:01.070 ]' 00:28:01.070 02:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:01.070 02:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:01.070 02:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:01.328 02:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:01.328 02:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:01.328 02:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:01.328 02:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:01.328 02:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:01.587 02:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:MGExOTg5NjMzODNiNTBkYmE5MGMwMWVkZTJiZWRkYWQ4NTgwYzk4YjZiNTEyMjIzDfLwSA==: --dhchap-ctrl-secret DHHC-1:01:MmM3N2QyNWJiMmYzMjAzMTIxYjE5NDAyYjMwOTQwYWRxrbTT: 00:28:02.959 02:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:02.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:02.959 02:33:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:02.959 02:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.959 02:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:02.959 02:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.959 02:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:02.959 02:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:02.959 02:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:02.959 02:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:28:02.959 02:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:02.959 02:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:02.959 02:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:28:02.959 02:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:02.959 02:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:02.959 02:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:28:02.959 02:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.959 02:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:02.959 02:33:53 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.960 02:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:02.960 02:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:03.529 00:28:03.529 02:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:03.529 02:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:03.529 02:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:03.786 02:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.786 02:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:03.786 02:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.786 02:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:03.786 02:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.786 02:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:03.786 { 00:28:03.786 "cntlid": 111, 00:28:03.786 "qid": 0, 00:28:03.786 "state": "enabled", 00:28:03.786 "thread": "nvmf_tgt_poll_group_000", 00:28:03.786 "listen_address": { 00:28:03.786 "trtype": "TCP", 00:28:03.786 "adrfam": "IPv4", 00:28:03.786 "traddr": "10.0.0.2", 
00:28:03.786 "trsvcid": "4420" 00:28:03.786 }, 00:28:03.786 "peer_address": { 00:28:03.786 "trtype": "TCP", 00:28:03.786 "adrfam": "IPv4", 00:28:03.786 "traddr": "10.0.0.1", 00:28:03.786 "trsvcid": "53008" 00:28:03.786 }, 00:28:03.786 "auth": { 00:28:03.786 "state": "completed", 00:28:03.786 "digest": "sha512", 00:28:03.786 "dhgroup": "ffdhe2048" 00:28:03.786 } 00:28:03.786 } 00:28:03.786 ]' 00:28:03.786 02:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:03.786 02:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:03.786 02:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:03.786 02:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:03.786 02:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:03.786 02:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:03.786 02:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:03.786 02:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:04.043 02:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YWRlYzI4ODUxZWE0ZTBlNTA0MmU5MDViYTQyYTViZGE4ODkwNjU0ZmU4YmYyMjAwZjgxYzdlYzJkYWM5OTI3OUbGX4w=: 00:28:05.419 02:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:05.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:05.419 02:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:05.419 02:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.419 02:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:05.419 02:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.419 02:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:28:05.419 02:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:05.419 02:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:05.419 02:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:05.678 02:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:28:05.678 02:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:05.678 02:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:05.678 02:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:28:05.678 02:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:05.678 02:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:05.678 02:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:05.678 02:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.678 02:33:55 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:05.678 02:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.678 02:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:05.678 02:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:05.935 00:28:05.935 02:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:05.935 02:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:05.935 02:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:06.193 02:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.193 02:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:06.193 02:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.193 02:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:06.193 02:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.193 02:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:06.193 { 00:28:06.193 "cntlid": 113, 00:28:06.193 "qid": 0, 00:28:06.193 "state": "enabled", 00:28:06.193 "thread": 
"nvmf_tgt_poll_group_000", 00:28:06.193 "listen_address": { 00:28:06.193 "trtype": "TCP", 00:28:06.193 "adrfam": "IPv4", 00:28:06.193 "traddr": "10.0.0.2", 00:28:06.193 "trsvcid": "4420" 00:28:06.193 }, 00:28:06.193 "peer_address": { 00:28:06.193 "trtype": "TCP", 00:28:06.193 "adrfam": "IPv4", 00:28:06.193 "traddr": "10.0.0.1", 00:28:06.193 "trsvcid": "53050" 00:28:06.193 }, 00:28:06.193 "auth": { 00:28:06.193 "state": "completed", 00:28:06.193 "digest": "sha512", 00:28:06.193 "dhgroup": "ffdhe3072" 00:28:06.193 } 00:28:06.193 } 00:28:06.193 ]' 00:28:06.193 02:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:06.451 02:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:06.451 02:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:06.451 02:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:06.451 02:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:06.451 02:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:06.451 02:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:06.451 02:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:06.709 02:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:ZGE3NzU4MWJjOTY1YTllNGEyYzkxOTI2NjQ2MWZhNTZlYjQwNjU4NmFiYzg4NDdkFoZ5qw==: --dhchap-ctrl-secret DHHC-1:03:MWNiODM2ZDRmMjk4NTc5NzBlNWQ2NWM0MTRmYzUxMDQwNzhlOTk1MTA2M2E3YjQzMGEzM2U2NTRjZDRmYzEzMOafaCo=: 00:28:08.084 02:33:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:08.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:08.084 02:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:08.084 02:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.084 02:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:08.084 02:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.084 02:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:08.084 02:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:08.084 02:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:08.084 02:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:28:08.084 02:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:08.084 02:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:08.084 02:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:28:08.084 02:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:08.084 02:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:08.084 02:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:28:08.084 02:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.084 02:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:08.084 02:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.084 02:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:08.084 02:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:08.649 00:28:08.649 02:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:08.649 02:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:08.649 02:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:08.907 02:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.907 02:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:08.907 02:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.907 02:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:08.907 02:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.907 02:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:28:08.907 { 00:28:08.907 "cntlid": 115, 00:28:08.907 "qid": 0, 00:28:08.907 "state": "enabled", 00:28:08.907 "thread": "nvmf_tgt_poll_group_000", 00:28:08.907 "listen_address": { 00:28:08.907 "trtype": "TCP", 00:28:08.907 "adrfam": "IPv4", 00:28:08.907 "traddr": "10.0.0.2", 00:28:08.907 "trsvcid": "4420" 00:28:08.907 }, 00:28:08.907 "peer_address": { 00:28:08.907 "trtype": "TCP", 00:28:08.907 "adrfam": "IPv4", 00:28:08.907 "traddr": "10.0.0.1", 00:28:08.907 "trsvcid": "42722" 00:28:08.907 }, 00:28:08.907 "auth": { 00:28:08.907 "state": "completed", 00:28:08.907 "digest": "sha512", 00:28:08.907 "dhgroup": "ffdhe3072" 00:28:08.907 } 00:28:08.907 } 00:28:08.907 ]' 00:28:08.907 02:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:08.907 02:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:08.907 02:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:08.907 02:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:08.907 02:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:08.907 02:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:08.907 02:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:08.907 02:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:09.493 02:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:OTEyNTllNzNmYzdhNjRjMGNmY2YwM2ViOGMzMWE1MDZw2Nbs: --dhchap-ctrl-secret 
DHHC-1:02:YzdmYmRiN2E4M2NiMDZkZDY3ZGI0NTVkOGM0ZjA4YTA0YmFmMDgxZTgyYjZmYjlmAkH4TA==: 00:28:10.433 02:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:10.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:10.433 02:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:10.433 02:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.433 02:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:10.433 02:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.433 02:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:10.433 02:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:10.433 02:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:10.998 02:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:28:10.998 02:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:10.998 02:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:10.998 02:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:28:10.998 02:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:10.998 02:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:10.998 02:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:10.998 02:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.998 02:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:10.998 02:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.998 02:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:10.998 02:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:11.256 00:28:11.256 02:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:11.256 02:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:11.256 02:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:11.513 02:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.513 02:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:11.513 02:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.513 02:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:11.513 02:34:01 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.513 02:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:11.513 { 00:28:11.513 "cntlid": 117, 00:28:11.513 "qid": 0, 00:28:11.513 "state": "enabled", 00:28:11.513 "thread": "nvmf_tgt_poll_group_000", 00:28:11.513 "listen_address": { 00:28:11.513 "trtype": "TCP", 00:28:11.513 "adrfam": "IPv4", 00:28:11.513 "traddr": "10.0.0.2", 00:28:11.513 "trsvcid": "4420" 00:28:11.513 }, 00:28:11.513 "peer_address": { 00:28:11.513 "trtype": "TCP", 00:28:11.513 "adrfam": "IPv4", 00:28:11.513 "traddr": "10.0.0.1", 00:28:11.513 "trsvcid": "42744" 00:28:11.513 }, 00:28:11.513 "auth": { 00:28:11.513 "state": "completed", 00:28:11.513 "digest": "sha512", 00:28:11.513 "dhgroup": "ffdhe3072" 00:28:11.513 } 00:28:11.513 } 00:28:11.513 ]' 00:28:11.513 02:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:11.513 02:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:11.513 02:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:11.513 02:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:11.513 02:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:11.513 02:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:11.513 02:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:11.513 02:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:11.770 02:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc 
--dhchap-secret DHHC-1:02:MGExOTg5NjMzODNiNTBkYmE5MGMwMWVkZTJiZWRkYWQ4NTgwYzk4YjZiNTEyMjIzDfLwSA==: --dhchap-ctrl-secret DHHC-1:01:MmM3N2QyNWJiMmYzMjAzMTIxYjE5NDAyYjMwOTQwYWRxrbTT: 00:28:13.143 02:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:13.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:13.143 02:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:13.143 02:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.143 02:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:13.143 02:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.143 02:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:13.143 02:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:13.143 02:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:13.400 02:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:28:13.400 02:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:13.400 02:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:13.400 02:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:28:13.400 02:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:13.400 02:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:13.400 02:34:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:28:13.400 02:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.400 02:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:13.400 02:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.400 02:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:13.400 02:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:13.658 00:28:13.658 02:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:13.658 02:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:13.658 02:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:13.916 02:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.916 02:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:13.916 02:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.916 02:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:13.916 02:34:04 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.916 02:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:13.916 { 00:28:13.916 "cntlid": 119, 00:28:13.916 "qid": 0, 00:28:13.916 "state": "enabled", 00:28:13.916 "thread": "nvmf_tgt_poll_group_000", 00:28:13.916 "listen_address": { 00:28:13.916 "trtype": "TCP", 00:28:13.916 "adrfam": "IPv4", 00:28:13.916 "traddr": "10.0.0.2", 00:28:13.916 "trsvcid": "4420" 00:28:13.916 }, 00:28:13.916 "peer_address": { 00:28:13.916 "trtype": "TCP", 00:28:13.916 "adrfam": "IPv4", 00:28:13.916 "traddr": "10.0.0.1", 00:28:13.916 "trsvcid": "42784" 00:28:13.916 }, 00:28:13.916 "auth": { 00:28:13.916 "state": "completed", 00:28:13.916 "digest": "sha512", 00:28:13.916 "dhgroup": "ffdhe3072" 00:28:13.916 } 00:28:13.916 } 00:28:13.916 ]' 00:28:13.916 02:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:14.175 02:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:14.175 02:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:14.175 02:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:14.175 02:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:14.175 02:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:14.175 02:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:14.175 02:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:14.433 02:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc 
--dhchap-secret DHHC-1:03:YWRlYzI4ODUxZWE0ZTBlNTA0MmU5MDViYTQyYTViZGE4ODkwNjU0ZmU4YmYyMjAwZjgxYzdlYzJkYWM5OTI3OUbGX4w=: 00:28:15.807 02:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:15.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:15.808 02:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:15.808 02:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.808 02:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:15.808 02:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.808 02:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:28:15.808 02:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:15.808 02:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:15.808 02:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:16.066 02:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:28:16.066 02:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:16.066 02:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:16.066 02:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:28:16.066 02:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:16.066 02:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:16.066 02:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:16.066 02:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.066 02:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:16.066 02:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.066 02:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:16.066 02:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:16.325 00:28:16.325 02:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:16.325 02:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:16.325 02:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:16.582 02:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.582 02:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:16.582 02:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:28:16.582 02:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:16.582 02:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.841 02:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:16.841 { 00:28:16.841 "cntlid": 121, 00:28:16.841 "qid": 0, 00:28:16.841 "state": "enabled", 00:28:16.841 "thread": "nvmf_tgt_poll_group_000", 00:28:16.841 "listen_address": { 00:28:16.841 "trtype": "TCP", 00:28:16.841 "adrfam": "IPv4", 00:28:16.841 "traddr": "10.0.0.2", 00:28:16.841 "trsvcid": "4420" 00:28:16.841 }, 00:28:16.841 "peer_address": { 00:28:16.841 "trtype": "TCP", 00:28:16.841 "adrfam": "IPv4", 00:28:16.841 "traddr": "10.0.0.1", 00:28:16.841 "trsvcid": "42806" 00:28:16.841 }, 00:28:16.841 "auth": { 00:28:16.841 "state": "completed", 00:28:16.841 "digest": "sha512", 00:28:16.841 "dhgroup": "ffdhe4096" 00:28:16.841 } 00:28:16.841 } 00:28:16.841 ]' 00:28:16.841 02:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:16.841 02:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:16.841 02:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:16.841 02:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:16.841 02:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:16.841 02:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:16.841 02:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:16.841 02:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:17.100 02:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:ZGE3NzU4MWJjOTY1YTllNGEyYzkxOTI2NjQ2MWZhNTZlYjQwNjU4NmFiYzg4NDdkFoZ5qw==: --dhchap-ctrl-secret DHHC-1:03:MWNiODM2ZDRmMjk4NTc5NzBlNWQ2NWM0MTRmYzUxMDQwNzhlOTk1MTA2M2E3YjQzMGEzM2U2NTRjZDRmYzEzMOafaCo=: 00:28:18.472 02:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:18.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:18.472 02:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:18.472 02:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.472 02:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:18.472 02:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.472 02:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:18.472 02:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:18.472 02:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:18.730 02:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:28:18.730 02:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:18.730 02:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:18.730 02:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:28:18.730 02:34:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:18.730 02:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:18.730 02:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:18.730 02:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.730 02:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:18.730 02:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.730 02:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:18.730 02:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:18.988 00:28:18.988 02:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:18.988 02:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:18.988 02:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:19.555 02:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.555 02:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:19.555 02:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.555 02:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:19.555 02:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.555 02:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:19.555 { 00:28:19.555 "cntlid": 123, 00:28:19.555 "qid": 0, 00:28:19.555 "state": "enabled", 00:28:19.555 "thread": "nvmf_tgt_poll_group_000", 00:28:19.555 "listen_address": { 00:28:19.555 "trtype": "TCP", 00:28:19.555 "adrfam": "IPv4", 00:28:19.555 "traddr": "10.0.0.2", 00:28:19.555 "trsvcid": "4420" 00:28:19.555 }, 00:28:19.555 "peer_address": { 00:28:19.555 "trtype": "TCP", 00:28:19.555 "adrfam": "IPv4", 00:28:19.555 "traddr": "10.0.0.1", 00:28:19.555 "trsvcid": "59128" 00:28:19.555 }, 00:28:19.555 "auth": { 00:28:19.555 "state": "completed", 00:28:19.555 "digest": "sha512", 00:28:19.555 "dhgroup": "ffdhe4096" 00:28:19.555 } 00:28:19.555 } 00:28:19.555 ]' 00:28:19.555 02:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:19.555 02:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:19.555 02:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:19.555 02:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:19.555 02:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:19.555 02:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:19.555 02:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:19.555 02:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:19.813 02:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:OTEyNTllNzNmYzdhNjRjMGNmY2YwM2ViOGMzMWE1MDZw2Nbs: --dhchap-ctrl-secret DHHC-1:02:YzdmYmRiN2E4M2NiMDZkZDY3ZGI0NTVkOGM0ZjA4YTA0YmFmMDgxZTgyYjZmYjlmAkH4TA==: 00:28:21.187 02:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:21.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:21.187 02:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:21.187 02:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.187 02:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:21.187 02:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.187 02:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:21.187 02:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:21.187 02:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:21.445 02:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:28:21.445 02:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:21.445 02:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 
00:28:21.445 02:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:28:21.445 02:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:21.445 02:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:21.445 02:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:21.445 02:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.445 02:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:21.445 02:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.445 02:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:21.445 02:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:21.703 00:28:21.703 02:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:21.703 02:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:21.703 02:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:22.269 02:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:28:22.269 02:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:22.269 02:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.269 02:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:22.269 02:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.269 02:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:22.269 { 00:28:22.269 "cntlid": 125, 00:28:22.269 "qid": 0, 00:28:22.269 "state": "enabled", 00:28:22.269 "thread": "nvmf_tgt_poll_group_000", 00:28:22.269 "listen_address": { 00:28:22.269 "trtype": "TCP", 00:28:22.269 "adrfam": "IPv4", 00:28:22.269 "traddr": "10.0.0.2", 00:28:22.269 "trsvcid": "4420" 00:28:22.269 }, 00:28:22.269 "peer_address": { 00:28:22.269 "trtype": "TCP", 00:28:22.269 "adrfam": "IPv4", 00:28:22.269 "traddr": "10.0.0.1", 00:28:22.269 "trsvcid": "59156" 00:28:22.269 }, 00:28:22.269 "auth": { 00:28:22.269 "state": "completed", 00:28:22.269 "digest": "sha512", 00:28:22.269 "dhgroup": "ffdhe4096" 00:28:22.269 } 00:28:22.269 } 00:28:22.269 ]' 00:28:22.269 02:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:22.269 02:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:22.269 02:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:22.269 02:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:22.269 02:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:22.269 02:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:22.269 02:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:22.269 02:34:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:22.527 02:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:MGExOTg5NjMzODNiNTBkYmE5MGMwMWVkZTJiZWRkYWQ4NTgwYzk4YjZiNTEyMjIzDfLwSA==: --dhchap-ctrl-secret DHHC-1:01:MmM3N2QyNWJiMmYzMjAzMTIxYjE5NDAyYjMwOTQwYWRxrbTT: 00:28:23.899 02:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:23.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:23.899 02:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:23.899 02:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.899 02:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:23.899 02:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.899 02:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:23.899 02:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:23.899 02:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:23.899 02:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:28:23.899 02:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:28:23.899 02:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:23.899 02:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:28:23.899 02:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:23.899 02:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:23.899 02:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:28:23.899 02:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.899 02:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:23.899 02:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.899 02:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:23.899 02:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:24.465 00:28:24.465 02:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:24.465 02:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:24.465 02:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:24.724 02:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:28:24.724 02:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:24.724 02:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.724 02:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:24.724 02:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.724 02:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:24.724 { 00:28:24.724 "cntlid": 127, 00:28:24.724 "qid": 0, 00:28:24.724 "state": "enabled", 00:28:24.724 "thread": "nvmf_tgt_poll_group_000", 00:28:24.724 "listen_address": { 00:28:24.724 "trtype": "TCP", 00:28:24.724 "adrfam": "IPv4", 00:28:24.724 "traddr": "10.0.0.2", 00:28:24.724 "trsvcid": "4420" 00:28:24.724 }, 00:28:24.724 "peer_address": { 00:28:24.724 "trtype": "TCP", 00:28:24.724 "adrfam": "IPv4", 00:28:24.724 "traddr": "10.0.0.1", 00:28:24.724 "trsvcid": "59188" 00:28:24.724 }, 00:28:24.724 "auth": { 00:28:24.724 "state": "completed", 00:28:24.724 "digest": "sha512", 00:28:24.724 "dhgroup": "ffdhe4096" 00:28:24.724 } 00:28:24.724 } 00:28:24.724 ]' 00:28:24.724 02:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:24.724 02:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:24.724 02:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:24.982 02:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:24.982 02:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:24.982 02:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:24.982 02:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:24.982 02:34:15 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:25.240 02:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YWRlYzI4ODUxZWE0ZTBlNTA0MmU5MDViYTQyYTViZGE4ODkwNjU0ZmU4YmYyMjAwZjgxYzdlYzJkYWM5OTI3OUbGX4w=: 00:28:26.176 02:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:26.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:26.176 02:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:26.176 02:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.176 02:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:26.434 02:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.435 02:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:28:26.435 02:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:26.435 02:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:26.435 02:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:26.693 02:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:28:26.693 02:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:28:26.693 02:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:26.693 02:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:28:26.693 02:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:26.693 02:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:26.693 02:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:26.693 02:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.693 02:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:26.693 02:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.693 02:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:26.693 02:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:27.260 00:28:27.260 02:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:27.260 02:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:27.260 02:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:28:27.518 02:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.518 02:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:27.518 02:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.518 02:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:27.518 02:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.518 02:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:27.518 { 00:28:27.518 "cntlid": 129, 00:28:27.518 "qid": 0, 00:28:27.518 "state": "enabled", 00:28:27.518 "thread": "nvmf_tgt_poll_group_000", 00:28:27.518 "listen_address": { 00:28:27.518 "trtype": "TCP", 00:28:27.518 "adrfam": "IPv4", 00:28:27.518 "traddr": "10.0.0.2", 00:28:27.518 "trsvcid": "4420" 00:28:27.518 }, 00:28:27.518 "peer_address": { 00:28:27.518 "trtype": "TCP", 00:28:27.518 "adrfam": "IPv4", 00:28:27.518 "traddr": "10.0.0.1", 00:28:27.518 "trsvcid": "58622" 00:28:27.518 }, 00:28:27.518 "auth": { 00:28:27.518 "state": "completed", 00:28:27.518 "digest": "sha512", 00:28:27.518 "dhgroup": "ffdhe6144" 00:28:27.518 } 00:28:27.518 } 00:28:27.518 ]' 00:28:27.518 02:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:27.518 02:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:27.518 02:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:27.776 02:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:27.776 02:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:27.776 02:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:27.776 02:34:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:27.776 02:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:28.035 02:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:ZGE3NzU4MWJjOTY1YTllNGEyYzkxOTI2NjQ2MWZhNTZlYjQwNjU4NmFiYzg4NDdkFoZ5qw==: --dhchap-ctrl-secret DHHC-1:03:MWNiODM2ZDRmMjk4NTc5NzBlNWQ2NWM0MTRmYzUxMDQwNzhlOTk1MTA2M2E3YjQzMGEzM2U2NTRjZDRmYzEzMOafaCo=: 00:28:29.450 02:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:29.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:29.450 02:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:29.450 02:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.450 02:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:29.450 02:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.450 02:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:29.450 02:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:29.450 02:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:29.450 02:34:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:28:29.450 02:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:29.450 02:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:29.450 02:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:28:29.450 02:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:29.450 02:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:29.450 02:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:29.450 02:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.450 02:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:29.450 02:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.450 02:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:29.450 02:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:30.028 00:28:30.028 02:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:30.028 02:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:28:30.028 02:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:30.593 02:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.593 02:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:30.593 02:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.593 02:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:30.593 02:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.593 02:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:30.593 { 00:28:30.593 "cntlid": 131, 00:28:30.593 "qid": 0, 00:28:30.593 "state": "enabled", 00:28:30.593 "thread": "nvmf_tgt_poll_group_000", 00:28:30.593 "listen_address": { 00:28:30.593 "trtype": "TCP", 00:28:30.593 "adrfam": "IPv4", 00:28:30.593 "traddr": "10.0.0.2", 00:28:30.593 "trsvcid": "4420" 00:28:30.593 }, 00:28:30.593 "peer_address": { 00:28:30.593 "trtype": "TCP", 00:28:30.593 "adrfam": "IPv4", 00:28:30.593 "traddr": "10.0.0.1", 00:28:30.593 "trsvcid": "58648" 00:28:30.593 }, 00:28:30.593 "auth": { 00:28:30.593 "state": "completed", 00:28:30.593 "digest": "sha512", 00:28:30.593 "dhgroup": "ffdhe6144" 00:28:30.593 } 00:28:30.593 } 00:28:30.593 ]' 00:28:30.593 02:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:30.593 02:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:30.593 02:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:30.593 02:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:30.593 02:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:28:30.593 02:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:30.593 02:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:30.593 02:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:30.852 02:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:OTEyNTllNzNmYzdhNjRjMGNmY2YwM2ViOGMzMWE1MDZw2Nbs: --dhchap-ctrl-secret DHHC-1:02:YzdmYmRiN2E4M2NiMDZkZDY3ZGI0NTVkOGM0ZjA4YTA0YmFmMDgxZTgyYjZmYjlmAkH4TA==: 00:28:32.227 02:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:32.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:32.227 02:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:32.227 02:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.227 02:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:32.227 02:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.227 02:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:32.227 02:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:32.227 02:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:32.227 02:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:28:32.227 02:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:32.227 02:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:32.227 02:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:28:32.227 02:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:32.227 02:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:32.227 02:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:32.227 02:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.227 02:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:32.485 02:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.485 02:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:32.485 02:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:33.050 00:28:33.050 02:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:28:33.050 02:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:33.050 02:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:33.309 02:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.309 02:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:33.309 02:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.309 02:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:33.309 02:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.309 02:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:33.309 { 00:28:33.309 "cntlid": 133, 00:28:33.309 "qid": 0, 00:28:33.309 "state": "enabled", 00:28:33.309 "thread": "nvmf_tgt_poll_group_000", 00:28:33.309 "listen_address": { 00:28:33.309 "trtype": "TCP", 00:28:33.309 "adrfam": "IPv4", 00:28:33.309 "traddr": "10.0.0.2", 00:28:33.309 "trsvcid": "4420" 00:28:33.309 }, 00:28:33.309 "peer_address": { 00:28:33.309 "trtype": "TCP", 00:28:33.309 "adrfam": "IPv4", 00:28:33.309 "traddr": "10.0.0.1", 00:28:33.309 "trsvcid": "58690" 00:28:33.309 }, 00:28:33.309 "auth": { 00:28:33.309 "state": "completed", 00:28:33.309 "digest": "sha512", 00:28:33.309 "dhgroup": "ffdhe6144" 00:28:33.309 } 00:28:33.309 } 00:28:33.309 ]' 00:28:33.309 02:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:33.309 02:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:33.309 02:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:33.309 02:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:33.309 02:34:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:33.565 02:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:33.565 02:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:33.565 02:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:33.820 02:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:MGExOTg5NjMzODNiNTBkYmE5MGMwMWVkZTJiZWRkYWQ4NTgwYzk4YjZiNTEyMjIzDfLwSA==: --dhchap-ctrl-secret DHHC-1:01:MmM3N2QyNWJiMmYzMjAzMTIxYjE5NDAyYjMwOTQwYWRxrbTT: 00:28:35.190 02:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:35.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:35.190 02:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:35.190 02:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.190 02:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:35.190 02:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.190 02:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:35.190 02:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:35.190 02:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:35.190 02:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:28:35.190 02:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:35.190 02:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:35.190 02:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:28:35.190 02:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:35.190 02:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:35.190 02:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:28:35.190 02:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.190 02:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:35.190 02:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.190 02:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:35.190 02:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:35.755 00:28:35.755 02:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:28:35.756 02:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:35.756 02:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:36.320 02:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.320 02:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:36.320 02:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.320 02:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:36.320 02:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.320 02:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:36.320 { 00:28:36.320 "cntlid": 135, 00:28:36.320 "qid": 0, 00:28:36.320 "state": "enabled", 00:28:36.320 "thread": "nvmf_tgt_poll_group_000", 00:28:36.320 "listen_address": { 00:28:36.320 "trtype": "TCP", 00:28:36.320 "adrfam": "IPv4", 00:28:36.320 "traddr": "10.0.0.2", 00:28:36.321 "trsvcid": "4420" 00:28:36.321 }, 00:28:36.321 "peer_address": { 00:28:36.321 "trtype": "TCP", 00:28:36.321 "adrfam": "IPv4", 00:28:36.321 "traddr": "10.0.0.1", 00:28:36.321 "trsvcid": "58704" 00:28:36.321 }, 00:28:36.321 "auth": { 00:28:36.321 "state": "completed", 00:28:36.321 "digest": "sha512", 00:28:36.321 "dhgroup": "ffdhe6144" 00:28:36.321 } 00:28:36.321 } 00:28:36.321 ]' 00:28:36.321 02:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:36.321 02:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:36.321 02:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:36.321 02:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:28:36.321 02:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:36.321 02:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:36.321 02:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:36.321 02:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:36.578 02:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YWRlYzI4ODUxZWE0ZTBlNTA0MmU5MDViYTQyYTViZGE4ODkwNjU0ZmU4YmYyMjAwZjgxYzdlYzJkYWM5OTI3OUbGX4w=: 00:28:37.950 02:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:37.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:37.950 02:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:37.950 02:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.950 02:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:37.950 02:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.950 02:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:28:37.950 02:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:37.950 02:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:37.950 02:34:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:38.207 02:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:28:38.207 02:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:38.207 02:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:38.207 02:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:28:38.207 02:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:38.207 02:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:38.207 02:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:38.207 02:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.207 02:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:38.207 02:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.207 02:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:38.207 02:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:39.141 00:28:39.141 02:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:39.141 02:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:39.141 02:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:39.399 02:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.400 02:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:39.400 02:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.400 02:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:39.400 02:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.400 02:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:39.400 { 00:28:39.400 "cntlid": 137, 00:28:39.400 "qid": 0, 00:28:39.400 "state": "enabled", 00:28:39.400 "thread": "nvmf_tgt_poll_group_000", 00:28:39.400 "listen_address": { 00:28:39.400 "trtype": "TCP", 00:28:39.400 "adrfam": "IPv4", 00:28:39.400 "traddr": "10.0.0.2", 00:28:39.400 "trsvcid": "4420" 00:28:39.400 }, 00:28:39.400 "peer_address": { 00:28:39.400 "trtype": "TCP", 00:28:39.400 "adrfam": "IPv4", 00:28:39.400 "traddr": "10.0.0.1", 00:28:39.400 "trsvcid": "35314" 00:28:39.400 }, 00:28:39.400 "auth": { 00:28:39.400 "state": "completed", 00:28:39.400 "digest": "sha512", 00:28:39.400 "dhgroup": "ffdhe8192" 00:28:39.400 } 00:28:39.400 } 00:28:39.400 ]' 00:28:39.400 02:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:39.400 02:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:39.400 02:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- 
# jq -r '.[0].auth.dhgroup' 00:28:39.400 02:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:39.400 02:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:39.400 02:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:39.400 02:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:39.400 02:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:39.658 02:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:ZGE3NzU4MWJjOTY1YTllNGEyYzkxOTI2NjQ2MWZhNTZlYjQwNjU4NmFiYzg4NDdkFoZ5qw==: --dhchap-ctrl-secret DHHC-1:03:MWNiODM2ZDRmMjk4NTc5NzBlNWQ2NWM0MTRmYzUxMDQwNzhlOTk1MTA2M2E3YjQzMGEzM2U2NTRjZDRmYzEzMOafaCo=: 00:28:41.030 02:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:41.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:41.030 02:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:41.030 02:34:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.030 02:34:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:41.030 02:34:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.030 02:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:41.030 02:34:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:41.030 02:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:41.289 02:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:28:41.289 02:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:41.289 02:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:41.289 02:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:28:41.289 02:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:41.289 02:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:41.289 02:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:41.289 02:34:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.289 02:34:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:41.289 02:34:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.289 02:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:41.289 02:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:42.224 00:28:42.224 02:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:42.224 02:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:42.224 02:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:42.483 02:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.483 02:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:42.483 02:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.483 02:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:42.483 02:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.483 02:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:42.483 { 00:28:42.483 "cntlid": 139, 00:28:42.483 "qid": 0, 00:28:42.483 "state": "enabled", 00:28:42.483 "thread": "nvmf_tgt_poll_group_000", 00:28:42.483 "listen_address": { 00:28:42.483 "trtype": "TCP", 00:28:42.483 "adrfam": "IPv4", 00:28:42.483 "traddr": "10.0.0.2", 00:28:42.483 "trsvcid": "4420" 00:28:42.483 }, 00:28:42.483 "peer_address": { 00:28:42.483 "trtype": "TCP", 00:28:42.483 "adrfam": "IPv4", 00:28:42.483 "traddr": "10.0.0.1", 00:28:42.483 "trsvcid": "35344" 00:28:42.483 }, 00:28:42.483 "auth": { 00:28:42.483 "state": "completed", 00:28:42.483 "digest": "sha512", 00:28:42.483 "dhgroup": "ffdhe8192" 00:28:42.483 } 00:28:42.483 } 00:28:42.483 ]' 00:28:42.483 02:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:42.742 02:34:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:42.742 02:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:42.742 02:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:42.742 02:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:42.742 02:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:42.742 02:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:42.742 02:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:43.000 02:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:OTEyNTllNzNmYzdhNjRjMGNmY2YwM2ViOGMzMWE1MDZw2Nbs: --dhchap-ctrl-secret DHHC-1:02:YzdmYmRiN2E4M2NiMDZkZDY3ZGI0NTVkOGM0ZjA4YTA0YmFmMDgxZTgyYjZmYjlmAkH4TA==: 00:28:44.373 02:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:44.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:44.373 02:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:44.373 02:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.373 02:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:44.373 02:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.373 02:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid 
in "${!keys[@]}" 00:28:44.373 02:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:44.373 02:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:44.373 02:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:28:44.373 02:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:44.373 02:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:44.373 02:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:28:44.373 02:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:44.373 02:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:44.373 02:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:44.373 02:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.373 02:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:44.373 02:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.373 02:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:44.373 02:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:45.306 00:28:45.306 02:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:45.306 02:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:45.306 02:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:45.872 02:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.872 02:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:45.872 02:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.872 02:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:45.872 02:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.872 02:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:45.872 { 00:28:45.872 "cntlid": 141, 00:28:45.872 "qid": 0, 00:28:45.872 "state": "enabled", 00:28:45.872 "thread": "nvmf_tgt_poll_group_000", 00:28:45.872 "listen_address": { 00:28:45.872 "trtype": "TCP", 00:28:45.872 "adrfam": "IPv4", 00:28:45.872 "traddr": "10.0.0.2", 00:28:45.872 "trsvcid": "4420" 00:28:45.872 }, 00:28:45.872 "peer_address": { 00:28:45.872 "trtype": "TCP", 00:28:45.872 "adrfam": "IPv4", 00:28:45.872 "traddr": "10.0.0.1", 00:28:45.872 "trsvcid": "35380" 00:28:45.872 }, 00:28:45.872 "auth": { 00:28:45.872 "state": "completed", 00:28:45.872 "digest": "sha512", 00:28:45.872 "dhgroup": "ffdhe8192" 00:28:45.872 } 00:28:45.872 } 00:28:45.872 ]' 00:28:45.872 02:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:28:45.872 02:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:45.872 02:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:45.872 02:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:45.872 02:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:45.872 02:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:45.872 02:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:45.872 02:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:46.130 02:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:MGExOTg5NjMzODNiNTBkYmE5MGMwMWVkZTJiZWRkYWQ4NTgwYzk4YjZiNTEyMjIzDfLwSA==: --dhchap-ctrl-secret DHHC-1:01:MmM3N2QyNWJiMmYzMjAzMTIxYjE5NDAyYjMwOTQwYWRxrbTT: 00:28:47.506 02:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:47.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:47.506 02:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:47.506 02:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.506 02:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:47.506 02:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.506 
02:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:47.506 02:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:47.506 02:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:47.506 02:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:28:47.506 02:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:47.506 02:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:47.506 02:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:28:47.506 02:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:47.506 02:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:47.506 02:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:28:47.506 02:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.506 02:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:47.506 02:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.506 02:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:47.506 02:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:48.880 00:28:48.880 02:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:48.880 02:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:48.880 02:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:48.880 02:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.880 02:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:48.880 02:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.880 02:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:48.880 02:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.880 02:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:48.880 { 00:28:48.880 "cntlid": 143, 00:28:48.880 "qid": 0, 00:28:48.880 "state": "enabled", 00:28:48.880 "thread": "nvmf_tgt_poll_group_000", 00:28:48.880 "listen_address": { 00:28:48.880 "trtype": "TCP", 00:28:48.880 "adrfam": "IPv4", 00:28:48.880 "traddr": "10.0.0.2", 00:28:48.880 "trsvcid": "4420" 00:28:48.880 }, 00:28:48.880 "peer_address": { 00:28:48.880 "trtype": "TCP", 00:28:48.880 "adrfam": "IPv4", 00:28:48.880 "traddr": "10.0.0.1", 00:28:48.880 "trsvcid": "46742" 00:28:48.880 }, 00:28:48.880 "auth": { 00:28:48.880 "state": "completed", 00:28:48.880 "digest": "sha512", 00:28:48.880 "dhgroup": "ffdhe8192" 00:28:48.880 } 00:28:48.880 } 00:28:48.880 ]' 00:28:48.880 02:34:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:48.880 02:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:48.880 02:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:48.880 02:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:48.880 02:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:49.138 02:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:49.138 02:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:49.138 02:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:49.396 02:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YWRlYzI4ODUxZWE0ZTBlNTA0MmU5MDViYTQyYTViZGE4ODkwNjU0ZmU4YmYyMjAwZjgxYzdlYzJkYWM5OTI3OUbGX4w=: 00:28:50.800 02:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:50.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:50.800 02:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:50.800 02:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.800 02:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:50.800 02:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.800 
02:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:28:50.800 02:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:28:50.800 02:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:28:50.800 02:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:50.800 02:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:50.800 02:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:50.800 02:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:28:50.800 02:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:50.800 02:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:50.800 02:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:28:50.800 02:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:50.800 02:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:50.801 02:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:50.801 02:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.801 02:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:50.801 02:34:41 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.801 02:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:50.801 02:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:51.736 00:28:51.736 02:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:51.736 02:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:51.736 02:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:52.302 02:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.302 02:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:52.302 02:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.302 02:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:52.302 02:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.302 02:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:52.302 { 00:28:52.302 "cntlid": 145, 00:28:52.302 "qid": 0, 00:28:52.302 "state": "enabled", 00:28:52.302 "thread": "nvmf_tgt_poll_group_000", 00:28:52.302 "listen_address": { 00:28:52.302 "trtype": "TCP", 00:28:52.302 "adrfam": 
"IPv4", 00:28:52.302 "traddr": "10.0.0.2", 00:28:52.302 "trsvcid": "4420" 00:28:52.302 }, 00:28:52.302 "peer_address": { 00:28:52.302 "trtype": "TCP", 00:28:52.302 "adrfam": "IPv4", 00:28:52.302 "traddr": "10.0.0.1", 00:28:52.302 "trsvcid": "46752" 00:28:52.302 }, 00:28:52.302 "auth": { 00:28:52.302 "state": "completed", 00:28:52.302 "digest": "sha512", 00:28:52.302 "dhgroup": "ffdhe8192" 00:28:52.302 } 00:28:52.302 } 00:28:52.302 ]' 00:28:52.302 02:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:52.302 02:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:52.302 02:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:52.302 02:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:52.302 02:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:52.302 02:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:52.302 02:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:52.302 02:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:52.560 02:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:ZGE3NzU4MWJjOTY1YTllNGEyYzkxOTI2NjQ2MWZhNTZlYjQwNjU4NmFiYzg4NDdkFoZ5qw==: --dhchap-ctrl-secret DHHC-1:03:MWNiODM2ZDRmMjk4NTc5NzBlNWQ2NWM0MTRmYzUxMDQwNzhlOTk1MTA2M2E3YjQzMGEzM2U2NTRjZDRmYzEzMOafaCo=: 00:28:53.932 02:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:53.932 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:53.932 02:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:53.932 02:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.932 02:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:53.932 02:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.932 02:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 00:28:53.932 02:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.932 02:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:53.932 02:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.932 02:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:28:53.932 02:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:28:53.932 02:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:28:53.932 02:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:28:53.932 02:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:53.932 02:34:44 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:28:53.932 02:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:53.932 02:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:28:53.932 02:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:28:54.864 request: 00:28:54.864 { 00:28:54.864 "name": "nvme0", 00:28:54.864 "trtype": "tcp", 00:28:54.864 "traddr": "10.0.0.2", 00:28:54.864 "adrfam": "ipv4", 00:28:54.864 "trsvcid": "4420", 00:28:54.864 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:28:54.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:28:54.864 "prchk_reftag": false, 00:28:54.864 "prchk_guard": false, 00:28:54.864 "hdgst": false, 00:28:54.864 "ddgst": false, 00:28:54.864 "dhchap_key": "key2", 00:28:54.864 "method": "bdev_nvme_attach_controller", 00:28:54.864 "req_id": 1 00:28:54.864 } 00:28:54.864 Got JSON-RPC error response 00:28:54.864 response: 00:28:54.864 { 00:28:54.864 "code": -5, 00:28:54.864 "message": "Input/output error" 00:28:54.864 } 00:28:54.864 02:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:28:54.864 02:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:54.864 02:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:54.864 02:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
00:28:54.864 02:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:54.864 02:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.864 02:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:54.864 02:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.864 02:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:54.864 02:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.864 02:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:54.864 02:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.864 02:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:54.864 02:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:28:54.864 02:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:54.864 02:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:28:54.864 02:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:54.864 
02:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:28:54.864 02:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:54.864 02:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:54.864 02:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:55.798 request: 00:28:55.798 { 00:28:55.798 "name": "nvme0", 00:28:55.798 "trtype": "tcp", 00:28:55.798 "traddr": "10.0.0.2", 00:28:55.798 "adrfam": "ipv4", 00:28:55.798 "trsvcid": "4420", 00:28:55.798 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:28:55.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:28:55.798 "prchk_reftag": false, 00:28:55.798 "prchk_guard": false, 00:28:55.798 "hdgst": false, 00:28:55.798 "ddgst": false, 00:28:55.798 "dhchap_key": "key1", 00:28:55.798 "dhchap_ctrlr_key": "ckey2", 00:28:55.798 "method": "bdev_nvme_attach_controller", 00:28:55.798 "req_id": 1 00:28:55.798 } 00:28:55.798 Got JSON-RPC error response 00:28:55.798 response: 00:28:55.798 { 00:28:55.798 "code": -5, 00:28:55.798 "message": "Input/output error" 00:28:55.798 } 00:28:55.798 02:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:28:55.798 02:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:55.798 02:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 
00:28:55.798 02:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:55.798 02:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:55.798 02:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.798 02:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:55.798 02:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.798 02:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 00:28:55.798 02:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.798 02:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:55.798 02:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.798 02:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:55.798 02:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:28:55.798 02:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:55.798 02:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:28:55.798 02:34:46 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:55.798 02:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:28:55.798 02:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:55.798 02:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:55.798 02:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:56.733 request: 00:28:56.733 { 00:28:56.733 "name": "nvme0", 00:28:56.733 "trtype": "tcp", 00:28:56.733 "traddr": "10.0.0.2", 00:28:56.733 "adrfam": "ipv4", 00:28:56.733 "trsvcid": "4420", 00:28:56.733 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:28:56.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:28:56.733 "prchk_reftag": false, 00:28:56.733 "prchk_guard": false, 00:28:56.733 "hdgst": false, 00:28:56.733 "ddgst": false, 00:28:56.733 "dhchap_key": "key1", 00:28:56.733 "dhchap_ctrlr_key": "ckey1", 00:28:56.733 "method": "bdev_nvme_attach_controller", 00:28:56.733 "req_id": 1 00:28:56.733 } 00:28:56.733 Got JSON-RPC error response 00:28:56.733 response: 00:28:56.733 { 00:28:56.733 "code": -5, 00:28:56.733 "message": "Input/output error" 00:28:56.733 } 00:28:56.733 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:28:56.733 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:56.733 02:34:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:56.733 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:56.733 02:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:56.733 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.733 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:56.733 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.733 02:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1858507 00:28:56.733 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1858507 ']' 00:28:56.733 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1858507 00:28:56.733 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:28:56.733 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:56.733 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1858507 00:28:56.733 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:56.733 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:56.733 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1858507' 00:28:56.733 killing process with pid 1858507 00:28:56.733 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1858507 00:28:56.733 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1858507 00:28:56.991 02:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L 
nvmf_auth 00:28:56.991 02:34:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:56.992 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:56.992 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:56.992 02:34:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1880871 00:28:56.992 02:34:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:28:56.992 02:34:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1880871 00:28:56.992 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1880871 ']' 00:28:56.992 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.992 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:56.992 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:56.992 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:56.992 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:57.251 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:57.251 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:28:57.251 02:34:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:57.251 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:57.251 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:57.251 02:34:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:57.251 02:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:28:57.251 02:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1880871 00:28:57.251 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1880871 ']' 00:28:57.251 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:57.251 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:57.251 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:57.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:57.251 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:57.251 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:57.817 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:57.817 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:28:57.817 02:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:28:57.817 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.817 02:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:57.817 02:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.817 02:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:28:57.817 02:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:57.817 02:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:57.817 02:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:28:57.817 02:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:57.817 02:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:57.817 02:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:28:57.817 02:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.817 02:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:57.817 02:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.817 02:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:57.817 02:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:58.750 00:28:58.750 02:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:58.750 02:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:58.750 02:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:59.009 02:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.009 02:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:59.009 02:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.009 02:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:59.009 02:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.009 02:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:59.009 { 00:28:59.009 "cntlid": 1, 00:28:59.009 "qid": 0, 00:28:59.009 "state": "enabled", 00:28:59.009 "thread": "nvmf_tgt_poll_group_000", 00:28:59.009 "listen_address": { 00:28:59.009 "trtype": "TCP", 00:28:59.009 "adrfam": "IPv4", 00:28:59.009 "traddr": "10.0.0.2", 00:28:59.009 "trsvcid": "4420" 00:28:59.009 }, 00:28:59.009 "peer_address": { 00:28:59.009 "trtype": "TCP", 00:28:59.009 "adrfam": "IPv4", 00:28:59.009 "traddr": "10.0.0.1", 00:28:59.009 "trsvcid": 
"57628" 00:28:59.009 }, 00:28:59.009 "auth": { 00:28:59.009 "state": "completed", 00:28:59.009 "digest": "sha512", 00:28:59.009 "dhgroup": "ffdhe8192" 00:28:59.009 } 00:28:59.009 } 00:28:59.009 ]' 00:28:59.009 02:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:59.267 02:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:59.267 02:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:59.267 02:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:59.267 02:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:59.267 02:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:59.267 02:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:59.267 02:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:59.525 02:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:YWRlYzI4ODUxZWE0ZTBlNTA0MmU5MDViYTQyYTViZGE4ODkwNjU0ZmU4YmYyMjAwZjgxYzdlYzJkYWM5OTI3OUbGX4w=: 00:29:00.898 02:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:00.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:00.898 02:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:29:00.898 02:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:29:00.898 02:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:00.898 02:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.898 02:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:29:00.898 02:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.898 02:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:00.898 02:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.898 02:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:29:00.898 02:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:29:00.898 02:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:00.898 02:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:29:00.898 02:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:00.898 02:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:29:00.899 02:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:00.899 02:34:51 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@640 -- # type -t hostrpc 00:29:00.899 02:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:00.899 02:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:00.899 02:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:01.464 request: 00:29:01.464 { 00:29:01.464 "name": "nvme0", 00:29:01.464 "trtype": "tcp", 00:29:01.464 "traddr": "10.0.0.2", 00:29:01.464 "adrfam": "ipv4", 00:29:01.464 "trsvcid": "4420", 00:29:01.464 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:29:01.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:29:01.464 "prchk_reftag": false, 00:29:01.464 "prchk_guard": false, 00:29:01.464 "hdgst": false, 00:29:01.464 "ddgst": false, 00:29:01.464 "dhchap_key": "key3", 00:29:01.464 "method": "bdev_nvme_attach_controller", 00:29:01.464 "req_id": 1 00:29:01.464 } 00:29:01.464 Got JSON-RPC error response 00:29:01.464 response: 00:29:01.464 { 00:29:01.464 "code": -5, 00:29:01.464 "message": "Input/output error" 00:29:01.464 } 00:29:01.464 02:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:29:01.464 02:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:01.464 02:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:01.464 02:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:01.464 02:34:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:29:01.464 02:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:29:01.464 02:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:29:01.464 02:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:29:01.464 02:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:01.464 02:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:29:01.464 02:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:01.464 02:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:29:01.464 02:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:01.464 02:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:29:01.464 02:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:01.464 02:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:01.464 
02:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:01.722 request: 00:29:01.722 { 00:29:01.722 "name": "nvme0", 00:29:01.722 "trtype": "tcp", 00:29:01.722 "traddr": "10.0.0.2", 00:29:01.722 "adrfam": "ipv4", 00:29:01.722 "trsvcid": "4420", 00:29:01.722 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:29:01.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:29:01.722 "prchk_reftag": false, 00:29:01.722 "prchk_guard": false, 00:29:01.722 "hdgst": false, 00:29:01.722 "ddgst": false, 00:29:01.722 "dhchap_key": "key3", 00:29:01.722 "method": "bdev_nvme_attach_controller", 00:29:01.722 "req_id": 1 00:29:01.722 } 00:29:01.722 Got JSON-RPC error response 00:29:01.722 response: 00:29:01.722 { 00:29:01.722 "code": -5, 00:29:01.722 "message": "Input/output error" 00:29:01.722 } 00:29:01.722 02:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:29:01.722 02:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:01.722 02:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:01.722 02:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:01.722 02:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:29:01.722 02:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:29:01.980 02:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:29:01.980 02:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:01.980 02:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:01.980 02:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:01.980 02:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:29:01.980 02:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.980 02:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:02.237 02:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.237 02:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:29:02.237 02:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.237 02:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:02.237 02:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.237 02:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:29:02.237 02:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:29:02.237 02:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:29:02.237 02:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:29:02.237 02:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:02.237 02:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:29:02.237 02:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:02.237 02:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:29:02.237 02:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:29:02.496 request: 00:29:02.496 { 00:29:02.496 "name": "nvme0", 00:29:02.496 "trtype": "tcp", 00:29:02.496 "traddr": "10.0.0.2", 00:29:02.496 "adrfam": "ipv4", 00:29:02.496 "trsvcid": "4420", 00:29:02.496 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:29:02.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:29:02.496 "prchk_reftag": false, 00:29:02.496 "prchk_guard": false, 00:29:02.496 "hdgst": false, 00:29:02.496 "ddgst": false, 00:29:02.496 "dhchap_key": "key0", 00:29:02.496 "dhchap_ctrlr_key": "key1", 00:29:02.496 "method": "bdev_nvme_attach_controller", 00:29:02.496 "req_id": 1 00:29:02.496 } 00:29:02.496 Got JSON-RPC error response 00:29:02.496 response: 00:29:02.496 { 
00:29:02.496 "code": -5, 00:29:02.496 "message": "Input/output error" 00:29:02.496 } 00:29:02.496 02:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:29:02.496 02:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:02.496 02:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:02.496 02:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:02.496 02:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:29:02.496 02:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:29:02.754 00:29:02.754 02:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:29:02.754 02:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:29:02.754 02:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:03.013 02:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.013 02:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:03.013 02:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:03.283 02:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - 
SIGINT SIGTERM EXIT 00:29:03.283 02:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:29:03.283 02:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1858598 00:29:03.283 02:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1858598 ']' 00:29:03.283 02:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1858598 00:29:03.283 02:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:29:03.283 02:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:03.283 02:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1858598 00:29:03.283 02:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:03.283 02:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:03.283 02:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1858598' 00:29:03.283 killing process with pid 1858598 00:29:03.283 02:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1858598 00:29:03.283 02:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1858598 00:29:03.547 02:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:29:03.547 02:34:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:03.547 02:34:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:29:03.547 02:34:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:03.547 02:34:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:29:03.547 02:34:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:03.547 02:34:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:03.547 rmmod nvme_tcp 00:29:03.547 rmmod nvme_fabrics 
00:29:03.547 rmmod nvme_keyring 00:29:03.547 02:34:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:03.547 02:34:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:29:03.547 02:34:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:29:03.547 02:34:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1880871 ']' 00:29:03.547 02:34:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1880871 00:29:03.547 02:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1880871 ']' 00:29:03.547 02:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1880871 00:29:03.547 02:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:29:03.547 02:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:03.547 02:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1880871 00:29:03.547 02:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:03.547 02:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:03.547 02:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1880871' 00:29:03.547 killing process with pid 1880871 00:29:03.547 02:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1880871 00:29:03.547 02:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1880871 00:29:03.805 02:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:03.805 02:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:03.805 02:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:03.805 02:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:29:03.805 02:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:03.805 02:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:03.805 02:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:03.805 02:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.710 02:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:05.710 02:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.27k /tmp/spdk.key-sha256.bO1 /tmp/spdk.key-sha384.Joi /tmp/spdk.key-sha512.50N /tmp/spdk.key-sha512.jKJ /tmp/spdk.key-sha384.XSE /tmp/spdk.key-sha256.Qdr '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:29:05.710 00:29:05.710 real 3m38.358s 00:29:05.710 user 8m28.252s 00:29:05.710 sys 0m26.183s 00:29:05.710 02:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:05.710 02:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:05.710 ************************************ 00:29:05.710 END TEST nvmf_auth_target 00:29:05.710 ************************************ 00:29:05.710 02:34:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:05.710 02:34:56 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:29:05.710 02:34:56 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:29:05.710 02:34:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:29:05.710 02:34:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:05.710 02:34:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:05.969 
************************************ 00:29:05.969 START TEST nvmf_bdevio_no_huge 00:29:05.969 ************************************ 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:29:05.969 * Looking for test storage... 00:29:05.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 
00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:05.969 02:34:56 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:29:05.969 02:34:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:29:07.870 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:07.870 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:29:07.870 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:07.870 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:29:07.871 Found 0000:08:00.0 (0x8086 - 0x159b) 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:29:07.871 Found 0000:08:00.1 (0x8086 - 0x159b) 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:29:07.871 Found net devices under 0000:08:00.0: cvl_0_0 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:29:07.871 Found net devices under 0000:08:00.1: cvl_0_1 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:07.871 02:34:57 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:07.871 
02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:07.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:07.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:29:07.871 00:29:07.871 --- 10.0.0.2 ping statistics --- 00:29:07.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.871 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:07.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:07.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:29:07.871 00:29:07.871 --- 10.0.0.1 ping statistics --- 00:29:07.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.871 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:07.871 02:34:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:07.871 02:34:57 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:07.871 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:29:07.871 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:07.871 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:07.871 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:29:07.871 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1883008 00:29:07.871 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1883008 00:29:07.871 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:29:07.871 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1883008 ']' 00:29:07.871 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.871 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:07.871 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.871 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:07.871 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:29:07.871 [2024-07-11 02:34:58.059982] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:29:07.872 [2024-07-11 02:34:58.060077] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:29:07.872 [2024-07-11 02:34:58.130009] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:07.872 [2024-07-11 02:34:58.217043] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.872 [2024-07-11 02:34:58.217105] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.872 [2024-07-11 02:34:58.217129] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.872 [2024-07-11 02:34:58.217149] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:07.872 [2024-07-11 02:34:58.217169] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:07.872 [2024-07-11 02:34:58.217244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:07.872 [2024-07-11 02:34:58.217301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:07.872 [2024-07-11 02:34:58.217356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:07.872 [2024-07-11 02:34:58.217364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:08.130 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:08.130 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:29:08.130 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:08.130 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:08.130 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:29:08.130 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.130 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:08.130 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.130 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:29:08.130 [2024-07-11 02:34:58.340971] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:08.130 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.130 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:08.130 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.130 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:29:08.130 Malloc0 00:29:08.130 02:34:58 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.131 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:08.131 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.131 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:29:08.131 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.131 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:08.131 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.131 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:29:08.131 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.131 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:08.131 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.131 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:29:08.131 [2024-07-11 02:34:58.379607] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:08.131 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.131 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:29:08.131 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:29:08.131 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:29:08.131 02:34:58 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:29:08.131 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:08.131 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:08.131 { 00:29:08.131 "params": { 00:29:08.131 "name": "Nvme$subsystem", 00:29:08.131 "trtype": "$TEST_TRANSPORT", 00:29:08.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.131 "adrfam": "ipv4", 00:29:08.131 "trsvcid": "$NVMF_PORT", 00:29:08.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.131 "hdgst": ${hdgst:-false}, 00:29:08.131 "ddgst": ${ddgst:-false} 00:29:08.131 }, 00:29:08.131 "method": "bdev_nvme_attach_controller" 00:29:08.131 } 00:29:08.131 EOF 00:29:08.131 )") 00:29:08.131 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:29:08.131 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:29:08.131 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:29:08.131 02:34:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:08.131 "params": { 00:29:08.131 "name": "Nvme1", 00:29:08.131 "trtype": "tcp", 00:29:08.131 "traddr": "10.0.0.2", 00:29:08.131 "adrfam": "ipv4", 00:29:08.131 "trsvcid": "4420", 00:29:08.131 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:08.131 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:08.131 "hdgst": false, 00:29:08.131 "ddgst": false 00:29:08.131 }, 00:29:08.131 "method": "bdev_nvme_attach_controller" 00:29:08.131 }' 00:29:08.131 [2024-07-11 02:34:58.428299] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:29:08.131 [2024-07-11 02:34:58.428393] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1883040 ] 00:29:08.131 [2024-07-11 02:34:58.489250] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:08.389 [2024-07-11 02:34:58.578941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.389 [2024-07-11 02:34:58.579020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:08.389 [2024-07-11 02:34:58.579024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.389 I/O targets: 00:29:08.389 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:29:08.389 00:29:08.389 00:29:08.389 CUnit - A unit testing framework for C - Version 2.1-3 00:29:08.389 http://cunit.sourceforge.net/ 00:29:08.389 00:29:08.389 00:29:08.389 Suite: bdevio tests on: Nvme1n1 00:29:08.647 Test: blockdev write read block ...passed 00:29:08.647 Test: blockdev write zeroes read block ...passed 00:29:08.647 Test: blockdev write zeroes read no split ...passed 00:29:08.647 Test: blockdev write zeroes read split ...passed 00:29:08.647 Test: blockdev write zeroes read split partial ...passed 00:29:08.647 Test: blockdev reset ...[2024-07-11 02:34:58.978120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.647 [2024-07-11 02:34:58.978250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf5f10 (9): Bad file descriptor 00:29:08.647 [2024-07-11 02:34:58.993408] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:08.647 passed 00:29:08.647 Test: blockdev write read 8 blocks ...passed 00:29:08.647 Test: blockdev write read size > 128k ...passed 00:29:08.647 Test: blockdev write read invalid size ...passed 00:29:08.904 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:08.904 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:08.904 Test: blockdev write read max offset ...passed 00:29:08.904 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:08.904 Test: blockdev writev readv 8 blocks ...passed 00:29:08.904 Test: blockdev writev readv 30 x 1block ...passed 00:29:08.904 Test: blockdev writev readv block ...passed 00:29:08.904 Test: blockdev writev readv size > 128k ...passed 00:29:08.904 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:08.904 Test: blockdev comparev and writev ...[2024-07-11 02:34:59.207138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:08.904 [2024-07-11 02:34:59.207179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.904 [2024-07-11 02:34:59.207206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:08.904 [2024-07-11 02:34:59.207223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:08.904 [2024-07-11 02:34:59.207568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:08.904 [2024-07-11 02:34:59.207602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:08.904 [2024-07-11 02:34:59.207630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:08.904 [2024-07-11 02:34:59.207647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:08.904 [2024-07-11 02:34:59.207978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:08.904 [2024-07-11 02:34:59.208003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:08.904 [2024-07-11 02:34:59.208027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:08.904 [2024-07-11 02:34:59.208045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:08.904 [2024-07-11 02:34:59.208371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:08.904 [2024-07-11 02:34:59.208395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:08.904 [2024-07-11 02:34:59.208419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:08.904 [2024-07-11 02:34:59.208436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:08.905 passed 00:29:08.905 Test: blockdev nvme passthru rw ...passed 00:29:08.905 Test: blockdev nvme passthru vendor specific ...[2024-07-11 02:34:59.291783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:08.905 [2024-07-11 02:34:59.291812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:08.905 [2024-07-11 02:34:59.291971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:08.905 [2024-07-11 02:34:59.291995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:08.905 [2024-07-11 02:34:59.292145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:08.905 [2024-07-11 02:34:59.292169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:08.905 [2024-07-11 02:34:59.292329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:08.905 [2024-07-11 02:34:59.292353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:08.905 passed 00:29:08.905 Test: blockdev nvme admin passthru ...passed 00:29:09.162 Test: blockdev copy ...passed 00:29:09.162 00:29:09.162 Run Summary: Type Total Ran Passed Failed Inactive 00:29:09.162 suites 1 1 n/a 0 0 00:29:09.162 tests 23 23 23 0 0 00:29:09.162 asserts 152 152 152 0 n/a 00:29:09.162 00:29:09.162 Elapsed time = 1.150 seconds 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@30 -- # nvmftestfini 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:09.420 rmmod nvme_tcp 00:29:09.420 rmmod nvme_fabrics 00:29:09.420 rmmod nvme_keyring 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1883008 ']' 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1883008 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1883008 ']' 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1883008 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1883008 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 1883008' 00:29:09.420 killing process with pid 1883008 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1883008 00:29:09.420 02:34:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1883008 00:29:09.679 02:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:09.679 02:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:09.679 02:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:09.679 02:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:09.679 02:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:09.679 02:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.679 02:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:09.679 02:35:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.232 02:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:12.232 00:29:12.232 real 0m5.988s 00:29:12.232 user 0m9.866s 00:29:12.232 sys 0m2.211s 00:29:12.232 02:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:12.232 02:35:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:29:12.232 ************************************ 00:29:12.232 END TEST nvmf_bdevio_no_huge 00:29:12.232 ************************************ 00:29:12.232 02:35:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:12.232 02:35:02 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:29:12.232 02:35:02 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:12.232 02:35:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:12.232 02:35:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:12.232 ************************************ 00:29:12.232 START TEST nvmf_tls 00:29:12.232 ************************************ 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:29:12.232 * Looking for test storage... 00:29:12.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # 
NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:12.232 02:35:02 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:29:12.232 02:35:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:29:13.611 Found 0000:08:00.0 (0x8086 - 0x159b) 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:29:13.611 Found 0000:08:00.1 (0x8086 - 0x159b) 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:13.611 02:35:03 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:29:13.611 Found net devices under 0000:08:00.0: cvl_0_0 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:29:13.611 Found net devices under 0000:08:00.1: cvl_0_1 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:13.611 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:13.612 02:35:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:13.612 02:35:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:13.612 02:35:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:13.920 02:35:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:13.920 02:35:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:13.920 02:35:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:13.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:13.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:29:13.920 00:29:13.920 --- 10.0.0.2 ping statistics --- 00:29:13.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.920 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:29:13.920 02:35:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:13.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:13.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:29:13.920 00:29:13.920 --- 10.0.0.1 ping statistics --- 00:29:13.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.920 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:29:13.920 02:35:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:13.920 02:35:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:29:13.920 02:35:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:13.920 02:35:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:13.920 02:35:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:13.920 02:35:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:13.920 02:35:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:13.920 02:35:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:13.920 02:35:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:13.920 02:35:04 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:29:13.920 02:35:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:13.920 02:35:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:13.920 02:35:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:13.920 02:35:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1884641 00:29:13.920 02:35:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1884641 00:29:13.920 02:35:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:29:13.920 02:35:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1884641 ']' 00:29:13.920 02:35:04 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.920 02:35:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:13.920 02:35:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.920 02:35:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:13.920 02:35:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:13.920 [2024-07-11 02:35:04.138205] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:29:13.920 [2024-07-11 02:35:04.138296] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.920 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.920 [2024-07-11 02:35:04.204545] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.920 [2024-07-11 02:35:04.290772] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:13.920 [2024-07-11 02:35:04.290833] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:13.920 [2024-07-11 02:35:04.290849] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:13.920 [2024-07-11 02:35:04.290863] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:13.920 [2024-07-11 02:35:04.290875] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:13.920 [2024-07-11 02:35:04.290904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:14.177 02:35:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:14.177 02:35:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:29:14.177 02:35:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:14.177 02:35:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:14.177 02:35:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:14.177 02:35:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:14.177 02:35:04 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:29:14.177 02:35:04 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:29:14.434 true 00:29:14.434 02:35:04 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:29:14.434 02:35:04 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:29:14.690 02:35:04 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:29:14.690 02:35:04 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:29:14.690 02:35:04 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:29:14.947 02:35:05 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:29:14.947 02:35:05 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:29:15.205 02:35:05 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:29:15.205 02:35:05 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:29:15.205 02:35:05 nvmf_tcp.nvmf_tls -- target/tls.sh@88 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:29:15.770 02:35:05 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:29:15.770 02:35:05 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:29:16.028 02:35:06 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:29:16.028 02:35:06 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:29:16.028 02:35:06 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:29:16.028 02:35:06 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:29:16.285 02:35:06 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:29:16.285 02:35:06 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:29:16.285 02:35:06 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:29:16.543 02:35:06 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:29:16.543 02:35:06 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:29:16.801 02:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:29:16.801 02:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:29:16.801 02:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:29:17.060 02:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:29:17.060 02:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:29:17.318 02:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # 
ktls=false 00:29:17.318 02:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:29:17.318 02:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:29:17.318 02:35:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:29:17.318 02:35:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:29:17.318 02:35:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:17.318 02:35:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:17.318 02:35:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:29:17.318 02:35:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:29:17.318 02:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:17.576 02:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:29:17.576 02:35:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:29:17.576 02:35:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:29:17.576 02:35:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:17.576 02:35:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:29:17.576 02:35:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:29:17.576 02:35:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:29:17.576 02:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:29:17.576 02:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:29:17.576 02:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.0PZxMMumEb 00:29:17.576 02:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:29:17.576 
02:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.9prv834Pao 00:29:17.576 02:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:17.576 02:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:29:17.576 02:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.0PZxMMumEb 00:29:17.576 02:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.9prv834Pao 00:29:17.576 02:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:29:17.834 02:35:08 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:29:18.093 02:35:08 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.0PZxMMumEb 00:29:18.093 02:35:08 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.0PZxMMumEb 00:29:18.093 02:35:08 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:18.350 [2024-07-11 02:35:08.739557] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:18.350 02:35:08 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:29:18.916 02:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:29:18.916 [2024-07-11 02:35:09.333186] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:18.916 [2024-07-11 02:35:09.333426] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:29:19.174 02:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:29:19.432 malloc0 00:29:19.432 02:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:29:19.690 02:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0PZxMMumEb 00:29:19.949 [2024-07-11 02:35:10.230136] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:19.949 02:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.0PZxMMumEb 00:29:19.949 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.147 Initializing NVMe Controllers 00:29:32.148 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:32.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:32.148 Initialization complete. Launching workers. 
00:29:32.148 ======================================================== 00:29:32.148 Latency(us) 00:29:32.148 Device Information : IOPS MiB/s Average min max 00:29:32.148 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7459.79 29.14 8582.25 1185.86 9615.15 00:29:32.148 ======================================================== 00:29:32.148 Total : 7459.79 29.14 8582.25 1185.86 9615.15 00:29:32.148 00:29:32.148 02:35:20 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0PZxMMumEb 00:29:32.148 02:35:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:29:32.148 02:35:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:29:32.148 02:35:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:29:32.148 02:35:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.0PZxMMumEb' 00:29:32.148 02:35:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:32.148 02:35:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1886179 00:29:32.148 02:35:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:29:32.148 02:35:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:32.148 02:35:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1886179 /var/tmp/bdevperf.sock 00:29:32.148 02:35:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1886179 ']' 00:29:32.148 02:35:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:32.148 02:35:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:32.148 02:35:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:32.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:32.148 02:35:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:32.148 02:35:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:32.148 [2024-07-11 02:35:20.418618] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:29:32.148 [2024-07-11 02:35:20.418721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1886179 ] 00:29:32.148 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.148 [2024-07-11 02:35:20.478988] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.148 [2024-07-11 02:35:20.566599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:32.148 02:35:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:32.148 02:35:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:29:32.148 02:35:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0PZxMMumEb 00:29:32.148 [2024-07-11 02:35:20.945842] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:32.148 [2024-07-11 02:35:20.945965] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:29:32.148 TLSTESTn1 00:29:32.148 02:35:21 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:29:32.148 Running I/O for 10 seconds... 00:29:42.113 00:29:42.113 Latency(us) 00:29:42.113 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.113 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:42.113 Verification LBA range: start 0x0 length 0x2000 00:29:42.113 TLSTESTn1 : 10.03 3258.15 12.73 0.00 0.00 39197.91 7961.41 34564.17 00:29:42.113 =================================================================================================================== 00:29:42.113 Total : 3258.15 12.73 0.00 0.00 39197.91 7961.41 34564.17 00:29:42.113 0 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1886179 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1886179 ']' 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1886179 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1886179 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1886179' 00:29:42.114 killing process with pid 1886179 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1886179 00:29:42.114 Received shutdown signal, test time was about 10.000000 seconds 00:29:42.114 00:29:42.114 Latency(us) 
00:29:42.114 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.114 =================================================================================================================== 00:29:42.114 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:42.114 [2024-07-11 02:35:31.251317] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1886179 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9prv834Pao 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9prv834Pao 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9prv834Pao 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9prv834Pao' 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1887167 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1887167 /var/tmp/bdevperf.sock 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1887167 ']' 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:42.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:42.114 [2024-07-11 02:35:31.468435] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:29:42.114 [2024-07-11 02:35:31.468542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1887167 ] 00:29:42.114 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.114 [2024-07-11 02:35:31.528356] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.114 [2024-07-11 02:35:31.615911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:29:42.114 02:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9prv834Pao 00:29:42.114 [2024-07-11 02:35:31.994767] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:42.114 [2024-07-11 02:35:31.994880] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:29:42.114 [2024-07-11 02:35:32.000487] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:42.114 [2024-07-11 02:35:32.000992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x929920 (107): Transport endpoint is not connected 00:29:42.114 [2024-07-11 02:35:32.001979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x929920 (9): Bad file descriptor 00:29:42.114 [2024-07-11 
02:35:32.002985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.114 [2024-07-11 02:35:32.003007] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:29:42.114 [2024-07-11 02:35:32.003027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.114 request: 00:29:42.114 { 00:29:42.114 "name": "TLSTEST", 00:29:42.114 "trtype": "tcp", 00:29:42.114 "traddr": "10.0.0.2", 00:29:42.114 "adrfam": "ipv4", 00:29:42.114 "trsvcid": "4420", 00:29:42.114 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:42.114 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:42.114 "prchk_reftag": false, 00:29:42.114 "prchk_guard": false, 00:29:42.114 "hdgst": false, 00:29:42.114 "ddgst": false, 00:29:42.114 "psk": "/tmp/tmp.9prv834Pao", 00:29:42.114 "method": "bdev_nvme_attach_controller", 00:29:42.114 "req_id": 1 00:29:42.114 } 00:29:42.114 Got JSON-RPC error response 00:29:42.114 response: 00:29:42.114 { 00:29:42.114 "code": -5, 00:29:42.114 "message": "Input/output error" 00:29:42.114 } 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1887167 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1887167 ']' 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1887167 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1887167 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
1887167' 00:29:42.114 killing process with pid 1887167 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1887167 00:29:42.114 Received shutdown signal, test time was about 10.000000 seconds 00:29:42.114 00:29:42.114 Latency(us) 00:29:42.114 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.114 =================================================================================================================== 00:29:42.114 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:42.114 [2024-07-11 02:35:32.048567] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1887167 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.0PZxMMumEb 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.0PZxMMumEb 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.0PZxMMumEb 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.0PZxMMumEb' 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1887275 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1887275 /var/tmp/bdevperf.sock 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1887275 ']' 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:42.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:42.114 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:42.114 [2024-07-11 02:35:32.250692] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:29:42.115 [2024-07-11 02:35:32.250791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1887275 ] 00:29:42.115 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.115 [2024-07-11 02:35:32.311193] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.115 [2024-07-11 02:35:32.402219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:42.115 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:42.115 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:29:42.115 02:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.0PZxMMumEb 00:29:42.372 [2024-07-11 02:35:32.778601] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:42.372 [2024-07-11 02:35:32.778719] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:29:42.372 [2024-07-11 02:35:32.784245] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:29:42.372 [2024-07-11 02:35:32.784278] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for 
identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:29:42.372 [2024-07-11 02:35:32.784321] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:42.372 [2024-07-11 02:35:32.784862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x103a920 (107): Transport endpoint is not connected 00:29:42.372 [2024-07-11 02:35:32.785858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x103a920 (9): Bad file descriptor 00:29:42.373 [2024-07-11 02:35:32.786857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.373 [2024-07-11 02:35:32.786888] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:29:42.373 [2024-07-11 02:35:32.786908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:42.373 request: 00:29:42.373 { 00:29:42.373 "name": "TLSTEST", 00:29:42.373 "trtype": "tcp", 00:29:42.373 "traddr": "10.0.0.2", 00:29:42.373 "adrfam": "ipv4", 00:29:42.373 "trsvcid": "4420", 00:29:42.373 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:42.373 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:42.373 "prchk_reftag": false, 00:29:42.373 "prchk_guard": false, 00:29:42.373 "hdgst": false, 00:29:42.373 "ddgst": false, 00:29:42.373 "psk": "/tmp/tmp.0PZxMMumEb", 00:29:42.373 "method": "bdev_nvme_attach_controller", 00:29:42.373 "req_id": 1 00:29:42.373 } 00:29:42.373 Got JSON-RPC error response 00:29:42.373 response: 00:29:42.373 { 00:29:42.373 "code": -5, 00:29:42.373 "message": "Input/output error" 00:29:42.373 } 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1887275 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1887275 ']' 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1887275 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1887275 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1887275' 00:29:42.631 killing process with pid 1887275 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1887275 00:29:42.631 Received shutdown signal, test time was about 10.000000 seconds 00:29:42.631 00:29:42.631 Latency(us) 00:29:42.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.631 
=================================================================================================================== 00:29:42.631 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:42.631 [2024-07-11 02:35:32.836782] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1887275 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.0PZxMMumEb 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.0PZxMMumEb 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.0PZxMMumEb 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.0PZxMMumEb' 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1887294 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1887294 /var/tmp/bdevperf.sock 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1887294 ']' 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:42.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:42.631 02:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:42.631 [2024-07-11 02:35:33.043727] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:29:42.631 [2024-07-11 02:35:33.043829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1887294 ] 00:29:42.889 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.889 [2024-07-11 02:35:33.105724] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.889 [2024-07-11 02:35:33.197259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:42.889 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:42.889 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:29:42.889 02:35:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0PZxMMumEb 00:29:43.453 [2024-07-11 02:35:33.569806] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:43.453 [2024-07-11 02:35:33.569937] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:29:43.453 [2024-07-11 02:35:33.575544] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:29:43.453 [2024-07-11 02:35:33.575578] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:29:43.453 [2024-07-11 02:35:33.575621] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:29:43.453 [2024-07-11 02:35:33.576144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1359920 (107): Transport endpoint is not connected 00:29:43.454 [2024-07-11 02:35:33.577132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1359920 (9): Bad file descriptor 00:29:43.454 [2024-07-11 02:35:33.578147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:29:43.454 [2024-07-11 02:35:33.578169] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:29:43.454 [2024-07-11 02:35:33.578190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:29:43.454 request: 00:29:43.454 { 00:29:43.454 "name": "TLSTEST", 00:29:43.454 "trtype": "tcp", 00:29:43.454 "traddr": "10.0.0.2", 00:29:43.454 "adrfam": "ipv4", 00:29:43.454 "trsvcid": "4420", 00:29:43.454 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:43.454 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:43.454 "prchk_reftag": false, 00:29:43.454 "prchk_guard": false, 00:29:43.454 "hdgst": false, 00:29:43.454 "ddgst": false, 00:29:43.454 "psk": "/tmp/tmp.0PZxMMumEb", 00:29:43.454 "method": "bdev_nvme_attach_controller", 00:29:43.454 "req_id": 1 00:29:43.454 } 00:29:43.454 Got JSON-RPC error response 00:29:43.454 response: 00:29:43.454 { 00:29:43.454 "code": -5, 00:29:43.454 "message": "Input/output error" 00:29:43.454 } 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1887294 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1887294 ']' 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1887294 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1887294 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1887294' 00:29:43.454 killing process with pid 1887294 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1887294 00:29:43.454 Received shutdown signal, test time was about 10.000000 seconds 00:29:43.454 00:29:43.454 Latency(us) 00:29:43.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:43.454 =================================================================================================================== 00:29:43.454 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:43.454 [2024-07-11 02:35:33.628542] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1887294 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 '' 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1887400 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1887400 /var/tmp/bdevperf.sock 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1887400 ']' 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:43.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:43.454 02:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:43.454 [2024-07-11 02:35:33.833800] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:29:43.454 [2024-07-11 02:35:33.833906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1887400 ] 00:29:43.454 EAL: No free 2048 kB hugepages reported on node 1 00:29:43.711 [2024-07-11 02:35:33.894425] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.711 [2024-07-11 02:35:33.985310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:43.711 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:43.711 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:29:43.711 02:35:34 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:29:43.968 [2024-07-11 02:35:34.376127] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:43.968 [2024-07-11 02:35:34.378148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x740a00 (9): Bad file descriptor 00:29:43.968 [2024-07-11 02:35:34.379144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.968 [2024-07-11 02:35:34.379174] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:29:43.968 [2024-07-11 02:35:34.379222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.968 request: 00:29:43.968 { 00:29:43.968 "name": "TLSTEST", 00:29:43.968 "trtype": "tcp", 00:29:43.968 "traddr": "10.0.0.2", 00:29:43.968 "adrfam": "ipv4", 00:29:43.968 "trsvcid": "4420", 00:29:43.968 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:43.968 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:43.968 "prchk_reftag": false, 00:29:43.968 "prchk_guard": false, 00:29:43.968 "hdgst": false, 00:29:43.968 "ddgst": false, 00:29:43.969 "method": "bdev_nvme_attach_controller", 00:29:43.969 "req_id": 1 00:29:43.969 } 00:29:43.969 Got JSON-RPC error response 00:29:43.969 response: 00:29:43.969 { 00:29:43.969 "code": -5, 00:29:43.969 "message": "Input/output error" 00:29:43.969 } 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1887400 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1887400 ']' 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1887400 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1887400 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1887400' 00:29:44.228 killing process with pid 1887400 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@967 -- # kill 1887400 00:29:44.228 Received shutdown signal, test time was about 10.000000 seconds 00:29:44.228 00:29:44.228 Latency(us) 00:29:44.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.228 =================================================================================================================== 00:29:44.228 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1887400 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1884641 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1884641 ']' 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1884641 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1884641 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1884641' 00:29:44.228 killing process with pid 1884641 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 
1884641 00:29:44.228 [2024-07-11 02:35:34.613844] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:44.228 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1884641 00:29:44.486 02:35:34 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:29:44.486 02:35:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:29:44.486 02:35:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:29:44.486 02:35:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:44.486 02:35:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:29:44.486 02:35:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:29:44.486 02:35:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:29:44.486 02:35:34 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:29:44.486 02:35:34 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:29:44.486 02:35:34 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.p0GMWYk5Zt 00:29:44.486 02:35:34 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:29:44.486 02:35:34 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.p0GMWYk5Zt 00:29:44.486 02:35:34 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:29:44.486 02:35:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:44.486 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:44.486 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:44.486 02:35:34 nvmf_tcp.nvmf_tls 
-- nvmf/common.sh@481 -- # nvmfpid=1887517 00:29:44.486 02:35:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:44.486 02:35:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1887517 00:29:44.486 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1887517 ']' 00:29:44.486 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.486 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:44.486 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:44.486 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:44.486 02:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:44.486 [2024-07-11 02:35:34.896445] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:29:44.486 [2024-07-11 02:35:34.896544] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:44.745 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.745 [2024-07-11 02:35:34.960274] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.745 [2024-07-11 02:35:35.046088] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:44.745 [2024-07-11 02:35:35.046149] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:44.745 [2024-07-11 02:35:35.046165] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:44.745 [2024-07-11 02:35:35.046178] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:44.745 [2024-07-11 02:35:35.046190] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:44.745 [2024-07-11 02:35:35.046220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:44.745 02:35:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:44.745 02:35:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:29:44.745 02:35:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:44.745 02:35:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:44.745 02:35:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:44.745 02:35:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:44.745 02:35:35 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.p0GMWYk5Zt 00:29:44.745 02:35:35 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.p0GMWYk5Zt 00:29:44.745 02:35:35 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:45.337 [2024-07-11 02:35:35.440423] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:45.337 02:35:35 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:29:45.595 02:35:35 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 
00:29:45.854 [2024-07-11 02:35:36.037995] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:45.854 [2024-07-11 02:35:36.038215] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:45.854 02:35:36 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:29:46.112 malloc0 00:29:46.112 02:35:36 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:29:46.371 02:35:36 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p0GMWYk5Zt 00:29:46.630 [2024-07-11 02:35:36.938326] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:46.630 02:35:36 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.p0GMWYk5Zt 00:29:46.630 02:35:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:29:46.630 02:35:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:29:46.630 02:35:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:29:46.630 02:35:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.p0GMWYk5Zt' 00:29:46.630 02:35:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:46.630 02:35:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1887739 00:29:46.630 02:35:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:46.630 02:35:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1887739 /var/tmp/bdevperf.sock 00:29:46.630 
02:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1887739 ']' 00:29:46.630 02:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:46.630 02:35:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:29:46.630 02:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:46.630 02:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:46.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:46.630 02:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:46.630 02:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:46.630 [2024-07-11 02:35:37.007333] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:29:46.630 [2024-07-11 02:35:37.007433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1887739 ] 00:29:46.630 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.889 [2024-07-11 02:35:37.068293] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.889 [2024-07-11 02:35:37.155795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:46.889 02:35:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:46.889 02:35:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:29:46.889 02:35:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p0GMWYk5Zt 00:29:47.148 [2024-07-11 02:35:37.538898] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:47.148 [2024-07-11 02:35:37.539017] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:29:47.408 TLSTESTn1 00:29:47.408 02:35:37 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:29:47.408 Running I/O for 10 seconds... 
00:29:57.390 00:29:57.390 Latency(us) 00:29:57.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.390 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:57.390 Verification LBA range: start 0x0 length 0x2000 00:29:57.390 TLSTESTn1 : 10.02 3264.41 12.75 0.00 0.00 39135.95 7330.32 36505.98 00:29:57.390 =================================================================================================================== 00:29:57.390 Total : 3264.41 12.75 0.00 0.00 39135.95 7330.32 36505.98 00:29:57.390 0 00:29:57.390 02:35:47 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:57.390 02:35:47 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1887739 00:29:57.390 02:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1887739 ']' 00:29:57.390 02:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1887739 00:29:57.390 02:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:29:57.648 02:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:57.648 02:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1887739 00:29:57.648 02:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:29:57.648 02:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:29:57.648 02:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1887739' 00:29:57.649 killing process with pid 1887739 00:29:57.649 02:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1887739 00:29:57.649 Received shutdown signal, test time was about 10.000000 seconds 00:29:57.649 00:29:57.649 Latency(us) 00:29:57.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.649 
=================================================================================================================== 00:29:57.649 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:57.649 [2024-07-11 02:35:47.834313] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:29:57.649 02:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1887739 00:29:57.649 02:35:47 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.p0GMWYk5Zt 00:29:57.649 02:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.p0GMWYk5Zt 00:29:57.649 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:29:57.649 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.p0GMWYk5Zt 00:29:57.649 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:29:57.649 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:57.649 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:29:57.649 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:57.649 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.p0GMWYk5Zt 00:29:57.649 02:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:29:57.649 02:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:29:57.649 02:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:29:57.649 02:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.p0GMWYk5Zt' 00:29:57.649 02:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:57.649 02:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:29:57.649 02:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1888729 00:29:57.649 02:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:57.649 02:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1888729 /var/tmp/bdevperf.sock 00:29:57.649 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1888729 ']' 00:29:57.649 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:57.649 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:57.649 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:57.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:57.649 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:57.649 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:57.649 [2024-07-11 02:35:48.055693] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:29:57.649 [2024-07-11 02:35:48.055791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1888729 ] 00:29:57.907 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.907 [2024-07-11 02:35:48.116956] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.907 [2024-07-11 02:35:48.204703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:57.907 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:57.907 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:29:57.907 02:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p0GMWYk5Zt 00:29:58.165 [2024-07-11 02:35:48.579602] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:58.165 [2024-07-11 02:35:48.579676] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:29:58.165 [2024-07-11 02:35:48.579693] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.p0GMWYk5Zt 00:29:58.165 request: 00:29:58.165 { 00:29:58.165 "name": "TLSTEST", 00:29:58.165 "trtype": "tcp", 00:29:58.165 "traddr": "10.0.0.2", 00:29:58.165 "adrfam": "ipv4", 00:29:58.165 "trsvcid": "4420", 00:29:58.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:58.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:58.165 "prchk_reftag": false, 00:29:58.165 "prchk_guard": false, 00:29:58.165 "hdgst": false, 00:29:58.165 "ddgst": false, 00:29:58.165 "psk": "/tmp/tmp.p0GMWYk5Zt", 00:29:58.165 "method": "bdev_nvme_attach_controller", 
00:29:58.165 "req_id": 1 00:29:58.165 } 00:29:58.165 Got JSON-RPC error response 00:29:58.165 response: 00:29:58.165 { 00:29:58.165 "code": -1, 00:29:58.165 "message": "Operation not permitted" 00:29:58.165 } 00:29:58.423 02:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1888729 00:29:58.423 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1888729 ']' 00:29:58.423 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1888729 00:29:58.423 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:29:58.424 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:58.424 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1888729 00:29:58.424 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:29:58.424 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:29:58.424 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1888729' 00:29:58.424 killing process with pid 1888729 00:29:58.424 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1888729 00:29:58.424 Received shutdown signal, test time was about 10.000000 seconds 00:29:58.424 00:29:58.424 Latency(us) 00:29:58.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.424 =================================================================================================================== 00:29:58.424 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:58.424 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1888729 00:29:58.424 02:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:29:58.424 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:29:58.424 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:58.424 
02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:58.424 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:58.424 02:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1887517 00:29:58.424 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1887517 ']' 00:29:58.424 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1887517 00:29:58.424 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:29:58.424 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:58.424 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1887517 00:29:58.424 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:58.424 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:58.424 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1887517' 00:29:58.424 killing process with pid 1887517 00:29:58.424 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1887517 00:29:58.424 [2024-07-11 02:35:48.797191] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:58.424 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1887517 00:29:58.682 02:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:29:58.682 02:35:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:58.682 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:58.682 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:58.682 02:35:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1888843 00:29:58.682 02:35:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:58.682 02:35:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1888843 00:29:58.682 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1888843 ']' 00:29:58.682 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:58.682 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:58.682 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:58.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:58.682 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:58.682 02:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:58.682 [2024-07-11 02:35:49.030127] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:29:58.682 [2024-07-11 02:35:49.030222] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:58.682 EAL: No free 2048 kB hugepages reported on node 1 00:29:58.682 [2024-07-11 02:35:49.094092] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.940 [2024-07-11 02:35:49.180290] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:58.940 [2024-07-11 02:35:49.180351] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:58.940 [2024-07-11 02:35:49.180368] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:58.940 [2024-07-11 02:35:49.180381] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:58.940 [2024-07-11 02:35:49.180393] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:58.940 [2024-07-11 02:35:49.180430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:58.940 02:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:58.940 02:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:29:58.940 02:35:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:58.940 02:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:58.940 02:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:29:58.940 02:35:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:58.940 02:35:49 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.p0GMWYk5Zt 00:29:58.940 02:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:29:58.940 02:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.p0GMWYk5Zt 00:29:58.940 02:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:29:58.940 02:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:58.940 02:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:29:58.940 02:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:58.940 02:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.p0GMWYk5Zt 00:29:58.940 02:35:49 nvmf_tcp.nvmf_tls -- 
target/tls.sh@49 -- # local key=/tmp/tmp.p0GMWYk5Zt 00:29:58.940 02:35:49 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:59.198 [2024-07-11 02:35:49.587194] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:59.198 02:35:49 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:29:59.764 02:35:49 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:29:59.764 [2024-07-11 02:35:50.176782] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:59.764 [2024-07-11 02:35:50.177026] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:00.022 02:35:50 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:30:00.279 malloc0 00:30:00.279 02:35:50 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:00.537 02:35:50 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p0GMWYk5Zt 00:30:00.795 [2024-07-11 02:35:51.073077] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:30:00.795 [2024-07-11 02:35:51.073121] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:30:00.795 [2024-07-11 02:35:51.073163] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:30:00.795 
request: 00:30:00.795 { 00:30:00.795 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:00.795 "host": "nqn.2016-06.io.spdk:host1", 00:30:00.795 "psk": "/tmp/tmp.p0GMWYk5Zt", 00:30:00.795 "method": "nvmf_subsystem_add_host", 00:30:00.795 "req_id": 1 00:30:00.795 } 00:30:00.795 Got JSON-RPC error response 00:30:00.795 response: 00:30:00.795 { 00:30:00.795 "code": -32603, 00:30:00.795 "message": "Internal error" 00:30:00.795 } 00:30:00.795 02:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:30:00.795 02:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:00.795 02:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:00.795 02:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:00.795 02:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1888843 00:30:00.795 02:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1888843 ']' 00:30:00.795 02:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1888843 00:30:00.795 02:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:30:00.795 02:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:00.795 02:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1888843 00:30:00.795 02:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:00.795 02:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:00.796 02:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1888843' 00:30:00.796 killing process with pid 1888843 00:30:00.796 02:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1888843 00:30:00.796 02:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1888843 00:30:01.054 02:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.p0GMWYk5Zt 
00:30:01.054 02:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:30:01.054 02:35:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:01.054 02:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:01.054 02:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:01.054 02:35:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1889075 00:30:01.054 02:35:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:01.054 02:35:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1889075 00:30:01.054 02:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1889075 ']' 00:30:01.054 02:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.054 02:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:01.054 02:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:01.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:01.054 02:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:01.054 02:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:01.054 [2024-07-11 02:35:51.352258] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:30:01.054 [2024-07-11 02:35:51.352351] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:01.054 EAL: No free 2048 kB hugepages reported on node 1 00:30:01.054 [2024-07-11 02:35:51.415658] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.312 [2024-07-11 02:35:51.501902] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:01.312 [2024-07-11 02:35:51.501965] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:01.312 [2024-07-11 02:35:51.501982] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:01.312 [2024-07-11 02:35:51.501996] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:01.312 [2024-07-11 02:35:51.502009] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:01.312 [2024-07-11 02:35:51.502052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.312 02:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:01.312 02:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:30:01.312 02:35:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:01.312 02:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:01.312 02:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:01.312 02:35:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:01.312 02:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.p0GMWYk5Zt 00:30:01.312 02:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.p0GMWYk5Zt 00:30:01.312 02:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:01.571 [2024-07-11 02:35:51.892645] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.571 02:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:30:01.830 02:35:52 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:30:02.088 [2024-07-11 02:35:52.389962] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:02.088 [2024-07-11 02:35:52.390180] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:02.088 02:35:52 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 
4096 -b malloc0 00:30:02.346 malloc0 00:30:02.346 02:35:52 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:02.604 02:35:52 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p0GMWYk5Zt 00:30:02.861 [2024-07-11 02:35:53.122343] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:02.861 02:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1889280 00:30:02.861 02:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:30:02.861 02:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:02.861 02:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1889280 /var/tmp/bdevperf.sock 00:30:02.861 02:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1889280 ']' 00:30:02.861 02:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:02.862 02:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:02.862 02:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:02.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:02.862 02:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:02.862 02:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:02.862 [2024-07-11 02:35:53.186268] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:30:02.862 [2024-07-11 02:35:53.186361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1889280 ] 00:30:02.862 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.862 [2024-07-11 02:35:53.242270] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.119 [2024-07-11 02:35:53.333911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:03.119 02:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:03.119 02:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:30:03.119 02:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p0GMWYk5Zt 00:30:03.379 [2024-07-11 02:35:53.660624] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:03.379 [2024-07-11 02:35:53.660749] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:30:03.379 TLSTESTn1 00:30:03.379 02:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:30:03.640 02:35:54 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:30:03.640 "subsystems": [ 00:30:03.640 { 00:30:03.640 
"subsystem": "keyring", 00:30:03.640 "config": [] 00:30:03.640 }, 00:30:03.640 { 00:30:03.640 "subsystem": "iobuf", 00:30:03.640 "config": [ 00:30:03.640 { 00:30:03.640 "method": "iobuf_set_options", 00:30:03.640 "params": { 00:30:03.640 "small_pool_count": 8192, 00:30:03.640 "large_pool_count": 1024, 00:30:03.640 "small_bufsize": 8192, 00:30:03.640 "large_bufsize": 135168 00:30:03.640 } 00:30:03.640 } 00:30:03.640 ] 00:30:03.640 }, 00:30:03.640 { 00:30:03.640 "subsystem": "sock", 00:30:03.640 "config": [ 00:30:03.640 { 00:30:03.640 "method": "sock_set_default_impl", 00:30:03.640 "params": { 00:30:03.640 "impl_name": "posix" 00:30:03.640 } 00:30:03.640 }, 00:30:03.640 { 00:30:03.640 "method": "sock_impl_set_options", 00:30:03.640 "params": { 00:30:03.640 "impl_name": "ssl", 00:30:03.640 "recv_buf_size": 4096, 00:30:03.640 "send_buf_size": 4096, 00:30:03.640 "enable_recv_pipe": true, 00:30:03.640 "enable_quickack": false, 00:30:03.640 "enable_placement_id": 0, 00:30:03.640 "enable_zerocopy_send_server": true, 00:30:03.640 "enable_zerocopy_send_client": false, 00:30:03.640 "zerocopy_threshold": 0, 00:30:03.640 "tls_version": 0, 00:30:03.640 "enable_ktls": false 00:30:03.640 } 00:30:03.640 }, 00:30:03.640 { 00:30:03.640 "method": "sock_impl_set_options", 00:30:03.640 "params": { 00:30:03.640 "impl_name": "posix", 00:30:03.640 "recv_buf_size": 2097152, 00:30:03.640 "send_buf_size": 2097152, 00:30:03.640 "enable_recv_pipe": true, 00:30:03.640 "enable_quickack": false, 00:30:03.640 "enable_placement_id": 0, 00:30:03.640 "enable_zerocopy_send_server": true, 00:30:03.640 "enable_zerocopy_send_client": false, 00:30:03.641 "zerocopy_threshold": 0, 00:30:03.641 "tls_version": 0, 00:30:03.641 "enable_ktls": false 00:30:03.641 } 00:30:03.641 } 00:30:03.641 ] 00:30:03.641 }, 00:30:03.641 { 00:30:03.641 "subsystem": "vmd", 00:30:03.641 "config": [] 00:30:03.641 }, 00:30:03.641 { 00:30:03.641 "subsystem": "accel", 00:30:03.641 "config": [ 00:30:03.641 { 00:30:03.641 "method": 
"accel_set_options", 00:30:03.641 "params": { 00:30:03.641 "small_cache_size": 128, 00:30:03.641 "large_cache_size": 16, 00:30:03.641 "task_count": 2048, 00:30:03.641 "sequence_count": 2048, 00:30:03.641 "buf_count": 2048 00:30:03.641 } 00:30:03.641 } 00:30:03.641 ] 00:30:03.641 }, 00:30:03.641 { 00:30:03.641 "subsystem": "bdev", 00:30:03.641 "config": [ 00:30:03.641 { 00:30:03.641 "method": "bdev_set_options", 00:30:03.641 "params": { 00:30:03.641 "bdev_io_pool_size": 65535, 00:30:03.641 "bdev_io_cache_size": 256, 00:30:03.641 "bdev_auto_examine": true, 00:30:03.641 "iobuf_small_cache_size": 128, 00:30:03.641 "iobuf_large_cache_size": 16 00:30:03.641 } 00:30:03.641 }, 00:30:03.641 { 00:30:03.641 "method": "bdev_raid_set_options", 00:30:03.641 "params": { 00:30:03.641 "process_window_size_kb": 1024 00:30:03.641 } 00:30:03.641 }, 00:30:03.641 { 00:30:03.641 "method": "bdev_iscsi_set_options", 00:30:03.641 "params": { 00:30:03.641 "timeout_sec": 30 00:30:03.641 } 00:30:03.641 }, 00:30:03.641 { 00:30:03.641 "method": "bdev_nvme_set_options", 00:30:03.641 "params": { 00:30:03.641 "action_on_timeout": "none", 00:30:03.641 "timeout_us": 0, 00:30:03.641 "timeout_admin_us": 0, 00:30:03.641 "keep_alive_timeout_ms": 10000, 00:30:03.641 "arbitration_burst": 0, 00:30:03.641 "low_priority_weight": 0, 00:30:03.641 "medium_priority_weight": 0, 00:30:03.641 "high_priority_weight": 0, 00:30:03.641 "nvme_adminq_poll_period_us": 10000, 00:30:03.641 "nvme_ioq_poll_period_us": 0, 00:30:03.641 "io_queue_requests": 0, 00:30:03.641 "delay_cmd_submit": true, 00:30:03.641 "transport_retry_count": 4, 00:30:03.641 "bdev_retry_count": 3, 00:30:03.641 "transport_ack_timeout": 0, 00:30:03.641 "ctrlr_loss_timeout_sec": 0, 00:30:03.641 "reconnect_delay_sec": 0, 00:30:03.641 "fast_io_fail_timeout_sec": 0, 00:30:03.641 "disable_auto_failback": false, 00:30:03.641 "generate_uuids": false, 00:30:03.641 "transport_tos": 0, 00:30:03.641 "nvme_error_stat": false, 00:30:03.641 "rdma_srq_size": 0, 
00:30:03.641 "io_path_stat": false, 00:30:03.641 "allow_accel_sequence": false, 00:30:03.641 "rdma_max_cq_size": 0, 00:30:03.641 "rdma_cm_event_timeout_ms": 0, 00:30:03.641 "dhchap_digests": [ 00:30:03.641 "sha256", 00:30:03.641 "sha384", 00:30:03.641 "sha512" 00:30:03.641 ], 00:30:03.641 "dhchap_dhgroups": [ 00:30:03.641 "null", 00:30:03.641 "ffdhe2048", 00:30:03.641 "ffdhe3072", 00:30:03.641 "ffdhe4096", 00:30:03.641 "ffdhe6144", 00:30:03.641 "ffdhe8192" 00:30:03.641 ] 00:30:03.641 } 00:30:03.641 }, 00:30:03.641 { 00:30:03.641 "method": "bdev_nvme_set_hotplug", 00:30:03.641 "params": { 00:30:03.641 "period_us": 100000, 00:30:03.641 "enable": false 00:30:03.641 } 00:30:03.641 }, 00:30:03.641 { 00:30:03.641 "method": "bdev_malloc_create", 00:30:03.641 "params": { 00:30:03.641 "name": "malloc0", 00:30:03.641 "num_blocks": 8192, 00:30:03.641 "block_size": 4096, 00:30:03.641 "physical_block_size": 4096, 00:30:03.641 "uuid": "dd64e70b-5240-4db3-882c-0f4632b7f7c8", 00:30:03.641 "optimal_io_boundary": 0 00:30:03.641 } 00:30:03.641 }, 00:30:03.641 { 00:30:03.641 "method": "bdev_wait_for_examine" 00:30:03.641 } 00:30:03.641 ] 00:30:03.641 }, 00:30:03.641 { 00:30:03.641 "subsystem": "nbd", 00:30:03.641 "config": [] 00:30:03.641 }, 00:30:03.641 { 00:30:03.641 "subsystem": "scheduler", 00:30:03.641 "config": [ 00:30:03.641 { 00:30:03.641 "method": "framework_set_scheduler", 00:30:03.641 "params": { 00:30:03.641 "name": "static" 00:30:03.641 } 00:30:03.641 } 00:30:03.641 ] 00:30:03.641 }, 00:30:03.641 { 00:30:03.641 "subsystem": "nvmf", 00:30:03.641 "config": [ 00:30:03.641 { 00:30:03.641 "method": "nvmf_set_config", 00:30:03.641 "params": { 00:30:03.641 "discovery_filter": "match_any", 00:30:03.641 "admin_cmd_passthru": { 00:30:03.641 "identify_ctrlr": false 00:30:03.641 } 00:30:03.641 } 00:30:03.641 }, 00:30:03.641 { 00:30:03.641 "method": "nvmf_set_max_subsystems", 00:30:03.641 "params": { 00:30:03.641 "max_subsystems": 1024 00:30:03.641 } 00:30:03.641 }, 00:30:03.641 { 
00:30:03.641 "method": "nvmf_set_crdt", 00:30:03.641 "params": { 00:30:03.641 "crdt1": 0, 00:30:03.641 "crdt2": 0, 00:30:03.641 "crdt3": 0 00:30:03.641 } 00:30:03.641 }, 00:30:03.641 { 00:30:03.641 "method": "nvmf_create_transport", 00:30:03.641 "params": { 00:30:03.641 "trtype": "TCP", 00:30:03.641 "max_queue_depth": 128, 00:30:03.641 "max_io_qpairs_per_ctrlr": 127, 00:30:03.641 "in_capsule_data_size": 4096, 00:30:03.641 "max_io_size": 131072, 00:30:03.641 "io_unit_size": 131072, 00:30:03.641 "max_aq_depth": 128, 00:30:03.641 "num_shared_buffers": 511, 00:30:03.641 "buf_cache_size": 4294967295, 00:30:03.641 "dif_insert_or_strip": false, 00:30:03.641 "zcopy": false, 00:30:03.641 "c2h_success": false, 00:30:03.641 "sock_priority": 0, 00:30:03.641 "abort_timeout_sec": 1, 00:30:03.641 "ack_timeout": 0, 00:30:03.641 "data_wr_pool_size": 0 00:30:03.641 } 00:30:03.641 }, 00:30:03.641 { 00:30:03.641 "method": "nvmf_create_subsystem", 00:30:03.641 "params": { 00:30:03.641 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:03.641 "allow_any_host": false, 00:30:03.641 "serial_number": "SPDK00000000000001", 00:30:03.641 "model_number": "SPDK bdev Controller", 00:30:03.641 "max_namespaces": 10, 00:30:03.641 "min_cntlid": 1, 00:30:03.641 "max_cntlid": 65519, 00:30:03.641 "ana_reporting": false 00:30:03.641 } 00:30:03.641 }, 00:30:03.641 { 00:30:03.641 "method": "nvmf_subsystem_add_host", 00:30:03.641 "params": { 00:30:03.641 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:03.641 "host": "nqn.2016-06.io.spdk:host1", 00:30:03.641 "psk": "/tmp/tmp.p0GMWYk5Zt" 00:30:03.641 } 00:30:03.641 }, 00:30:03.641 { 00:30:03.641 "method": "nvmf_subsystem_add_ns", 00:30:03.641 "params": { 00:30:03.641 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:03.641 "namespace": { 00:30:03.641 "nsid": 1, 00:30:03.641 "bdev_name": "malloc0", 00:30:03.641 "nguid": "DD64E70B52404DB3882C0F4632B7F7C8", 00:30:03.641 "uuid": "dd64e70b-5240-4db3-882c-0f4632b7f7c8", 00:30:03.641 "no_auto_visible": false 00:30:03.641 } 00:30:03.641 
} 00:30:03.641 }, 00:30:03.641 { 00:30:03.641 "method": "nvmf_subsystem_add_listener", 00:30:03.641 "params": { 00:30:03.641 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:03.641 "listen_address": { 00:30:03.641 "trtype": "TCP", 00:30:03.641 "adrfam": "IPv4", 00:30:03.641 "traddr": "10.0.0.2", 00:30:03.641 "trsvcid": "4420" 00:30:03.641 }, 00:30:03.641 "secure_channel": true 00:30:03.641 } 00:30:03.641 } 00:30:03.641 ] 00:30:03.641 } 00:30:03.641 ] 00:30:03.641 }' 00:30:03.641 02:35:54 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:30:04.206 02:35:54 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:30:04.206 "subsystems": [ 00:30:04.206 { 00:30:04.206 "subsystem": "keyring", 00:30:04.206 "config": [] 00:30:04.206 }, 00:30:04.206 { 00:30:04.206 "subsystem": "iobuf", 00:30:04.206 "config": [ 00:30:04.206 { 00:30:04.206 "method": "iobuf_set_options", 00:30:04.206 "params": { 00:30:04.206 "small_pool_count": 8192, 00:30:04.206 "large_pool_count": 1024, 00:30:04.206 "small_bufsize": 8192, 00:30:04.206 "large_bufsize": 135168 00:30:04.206 } 00:30:04.206 } 00:30:04.206 ] 00:30:04.206 }, 00:30:04.206 { 00:30:04.206 "subsystem": "sock", 00:30:04.206 "config": [ 00:30:04.206 { 00:30:04.206 "method": "sock_set_default_impl", 00:30:04.206 "params": { 00:30:04.206 "impl_name": "posix" 00:30:04.206 } 00:30:04.206 }, 00:30:04.206 { 00:30:04.206 "method": "sock_impl_set_options", 00:30:04.206 "params": { 00:30:04.206 "impl_name": "ssl", 00:30:04.206 "recv_buf_size": 4096, 00:30:04.206 "send_buf_size": 4096, 00:30:04.206 "enable_recv_pipe": true, 00:30:04.206 "enable_quickack": false, 00:30:04.206 "enable_placement_id": 0, 00:30:04.206 "enable_zerocopy_send_server": true, 00:30:04.206 "enable_zerocopy_send_client": false, 00:30:04.206 "zerocopy_threshold": 0, 00:30:04.206 "tls_version": 0, 00:30:04.206 "enable_ktls": false 00:30:04.206 } 00:30:04.206 }, 00:30:04.206 { 
00:30:04.206 "method": "sock_impl_set_options", 00:30:04.206 "params": { 00:30:04.206 "impl_name": "posix", 00:30:04.206 "recv_buf_size": 2097152, 00:30:04.206 "send_buf_size": 2097152, 00:30:04.206 "enable_recv_pipe": true, 00:30:04.206 "enable_quickack": false, 00:30:04.206 "enable_placement_id": 0, 00:30:04.207 "enable_zerocopy_send_server": true, 00:30:04.207 "enable_zerocopy_send_client": false, 00:30:04.207 "zerocopy_threshold": 0, 00:30:04.207 "tls_version": 0, 00:30:04.207 "enable_ktls": false 00:30:04.207 } 00:30:04.207 } 00:30:04.207 ] 00:30:04.207 }, 00:30:04.207 { 00:30:04.207 "subsystem": "vmd", 00:30:04.207 "config": [] 00:30:04.207 }, 00:30:04.207 { 00:30:04.207 "subsystem": "accel", 00:30:04.207 "config": [ 00:30:04.207 { 00:30:04.207 "method": "accel_set_options", 00:30:04.207 "params": { 00:30:04.207 "small_cache_size": 128, 00:30:04.207 "large_cache_size": 16, 00:30:04.207 "task_count": 2048, 00:30:04.207 "sequence_count": 2048, 00:30:04.207 "buf_count": 2048 00:30:04.207 } 00:30:04.207 } 00:30:04.207 ] 00:30:04.207 }, 00:30:04.207 { 00:30:04.207 "subsystem": "bdev", 00:30:04.207 "config": [ 00:30:04.207 { 00:30:04.207 "method": "bdev_set_options", 00:30:04.207 "params": { 00:30:04.207 "bdev_io_pool_size": 65535, 00:30:04.207 "bdev_io_cache_size": 256, 00:30:04.207 "bdev_auto_examine": true, 00:30:04.207 "iobuf_small_cache_size": 128, 00:30:04.207 "iobuf_large_cache_size": 16 00:30:04.207 } 00:30:04.207 }, 00:30:04.207 { 00:30:04.207 "method": "bdev_raid_set_options", 00:30:04.207 "params": { 00:30:04.207 "process_window_size_kb": 1024 00:30:04.207 } 00:30:04.207 }, 00:30:04.207 { 00:30:04.207 "method": "bdev_iscsi_set_options", 00:30:04.207 "params": { 00:30:04.207 "timeout_sec": 30 00:30:04.207 } 00:30:04.207 }, 00:30:04.207 { 00:30:04.207 "method": "bdev_nvme_set_options", 00:30:04.207 "params": { 00:30:04.207 "action_on_timeout": "none", 00:30:04.207 "timeout_us": 0, 00:30:04.207 "timeout_admin_us": 0, 00:30:04.207 "keep_alive_timeout_ms": 
10000, 00:30:04.207 "arbitration_burst": 0, 00:30:04.207 "low_priority_weight": 0, 00:30:04.207 "medium_priority_weight": 0, 00:30:04.207 "high_priority_weight": 0, 00:30:04.207 "nvme_adminq_poll_period_us": 10000, 00:30:04.207 "nvme_ioq_poll_period_us": 0, 00:30:04.207 "io_queue_requests": 512, 00:30:04.207 "delay_cmd_submit": true, 00:30:04.207 "transport_retry_count": 4, 00:30:04.207 "bdev_retry_count": 3, 00:30:04.207 "transport_ack_timeout": 0, 00:30:04.207 "ctrlr_loss_timeout_sec": 0, 00:30:04.207 "reconnect_delay_sec": 0, 00:30:04.207 "fast_io_fail_timeout_sec": 0, 00:30:04.207 "disable_auto_failback": false, 00:30:04.207 "generate_uuids": false, 00:30:04.207 "transport_tos": 0, 00:30:04.207 "nvme_error_stat": false, 00:30:04.207 "rdma_srq_size": 0, 00:30:04.207 "io_path_stat": false, 00:30:04.207 "allow_accel_sequence": false, 00:30:04.207 "rdma_max_cq_size": 0, 00:30:04.207 "rdma_cm_event_timeout_ms": 0, 00:30:04.207 "dhchap_digests": [ 00:30:04.207 "sha256", 00:30:04.207 "sha384", 00:30:04.207 "sha512" 00:30:04.207 ], 00:30:04.207 "dhchap_dhgroups": [ 00:30:04.207 "null", 00:30:04.207 "ffdhe2048", 00:30:04.207 "ffdhe3072", 00:30:04.207 "ffdhe4096", 00:30:04.207 "ffdhe6144", 00:30:04.207 "ffdhe8192" 00:30:04.207 ] 00:30:04.207 } 00:30:04.207 }, 00:30:04.207 { 00:30:04.207 "method": "bdev_nvme_attach_controller", 00:30:04.207 "params": { 00:30:04.207 "name": "TLSTEST", 00:30:04.207 "trtype": "TCP", 00:30:04.207 "adrfam": "IPv4", 00:30:04.207 "traddr": "10.0.0.2", 00:30:04.207 "trsvcid": "4420", 00:30:04.207 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:04.207 "prchk_reftag": false, 00:30:04.207 "prchk_guard": false, 00:30:04.207 "ctrlr_loss_timeout_sec": 0, 00:30:04.207 "reconnect_delay_sec": 0, 00:30:04.207 "fast_io_fail_timeout_sec": 0, 00:30:04.207 "psk": "/tmp/tmp.p0GMWYk5Zt", 00:30:04.207 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:04.207 "hdgst": false, 00:30:04.207 "ddgst": false 00:30:04.207 } 00:30:04.207 }, 00:30:04.207 { 00:30:04.207 "method": 
"bdev_nvme_set_hotplug", 00:30:04.207 "params": { 00:30:04.207 "period_us": 100000, 00:30:04.207 "enable": false 00:30:04.207 } 00:30:04.207 }, 00:30:04.207 { 00:30:04.207 "method": "bdev_wait_for_examine" 00:30:04.207 } 00:30:04.207 ] 00:30:04.207 }, 00:30:04.207 { 00:30:04.207 "subsystem": "nbd", 00:30:04.207 "config": [] 00:30:04.207 } 00:30:04.207 ] 00:30:04.207 }' 00:30:04.207 02:35:54 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1889280 00:30:04.207 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1889280 ']' 00:30:04.207 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1889280 00:30:04.207 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:30:04.207 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:04.207 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1889280 00:30:04.207 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:30:04.207 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:30:04.207 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1889280' 00:30:04.207 killing process with pid 1889280 00:30:04.207 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1889280 00:30:04.207 Received shutdown signal, test time was about 10.000000 seconds 00:30:04.207 00:30:04.207 Latency(us) 00:30:04.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.207 =================================================================================================================== 00:30:04.207 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:04.207 [2024-07-11 02:35:54.398550] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 
00:30:04.207 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1889280 00:30:04.207 02:35:54 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1889075 00:30:04.207 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1889075 ']' 00:30:04.207 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1889075 00:30:04.207 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:30:04.207 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:04.207 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1889075 00:30:04.207 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:04.207 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:04.207 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1889075' 00:30:04.207 killing process with pid 1889075 00:30:04.207 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1889075 00:30:04.207 [2024-07-11 02:35:54.592024] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:04.207 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1889075 00:30:04.466 02:35:54 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:30:04.466 02:35:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:04.466 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:04.466 02:35:54 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:30:04.466 "subsystems": [ 00:30:04.466 { 00:30:04.466 "subsystem": "keyring", 00:30:04.466 "config": [] 00:30:04.466 }, 00:30:04.466 { 00:30:04.466 "subsystem": "iobuf", 00:30:04.466 "config": [ 00:30:04.466 { 00:30:04.466 "method": 
"iobuf_set_options", 00:30:04.466 "params": { 00:30:04.466 "small_pool_count": 8192, 00:30:04.466 "large_pool_count": 1024, 00:30:04.466 "small_bufsize": 8192, 00:30:04.466 "large_bufsize": 135168 00:30:04.466 } 00:30:04.466 } 00:30:04.466 ] 00:30:04.466 }, 00:30:04.466 { 00:30:04.466 "subsystem": "sock", 00:30:04.466 "config": [ 00:30:04.466 { 00:30:04.466 "method": "sock_set_default_impl", 00:30:04.466 "params": { 00:30:04.466 "impl_name": "posix" 00:30:04.466 } 00:30:04.466 }, 00:30:04.466 { 00:30:04.466 "method": "sock_impl_set_options", 00:30:04.466 "params": { 00:30:04.466 "impl_name": "ssl", 00:30:04.466 "recv_buf_size": 4096, 00:30:04.466 "send_buf_size": 4096, 00:30:04.466 "enable_recv_pipe": true, 00:30:04.466 "enable_quickack": false, 00:30:04.466 "enable_placement_id": 0, 00:30:04.466 "enable_zerocopy_send_server": true, 00:30:04.466 "enable_zerocopy_send_client": false, 00:30:04.466 "zerocopy_threshold": 0, 00:30:04.466 "tls_version": 0, 00:30:04.466 "enable_ktls": false 00:30:04.466 } 00:30:04.466 }, 00:30:04.466 { 00:30:04.466 "method": "sock_impl_set_options", 00:30:04.466 "params": { 00:30:04.466 "impl_name": "posix", 00:30:04.466 "recv_buf_size": 2097152, 00:30:04.466 "send_buf_size": 2097152, 00:30:04.466 "enable_recv_pipe": true, 00:30:04.466 "enable_quickack": false, 00:30:04.466 "enable_placement_id": 0, 00:30:04.466 "enable_zerocopy_send_server": true, 00:30:04.466 "enable_zerocopy_send_client": false, 00:30:04.466 "zerocopy_threshold": 0, 00:30:04.466 "tls_version": 0, 00:30:04.466 "enable_ktls": false 00:30:04.466 } 00:30:04.466 } 00:30:04.466 ] 00:30:04.466 }, 00:30:04.466 { 00:30:04.466 "subsystem": "vmd", 00:30:04.466 "config": [] 00:30:04.466 }, 00:30:04.466 { 00:30:04.466 "subsystem": "accel", 00:30:04.466 "config": [ 00:30:04.466 { 00:30:04.466 "method": "accel_set_options", 00:30:04.466 "params": { 00:30:04.466 "small_cache_size": 128, 00:30:04.466 "large_cache_size": 16, 00:30:04.466 "task_count": 2048, 00:30:04.466 
"sequence_count": 2048, 00:30:04.466 "buf_count": 2048 00:30:04.466 } 00:30:04.466 } 00:30:04.466 ] 00:30:04.466 }, 00:30:04.466 { 00:30:04.466 "subsystem": "bdev", 00:30:04.466 "config": [ 00:30:04.466 { 00:30:04.466 "method": "bdev_set_options", 00:30:04.466 "params": { 00:30:04.466 "bdev_io_pool_size": 65535, 00:30:04.466 "bdev_io_cache_size": 256, 00:30:04.466 "bdev_auto_examine": true, 00:30:04.466 "iobuf_small_cache_size": 128, 00:30:04.466 "iobuf_large_cache_size": 16 00:30:04.466 } 00:30:04.466 }, 00:30:04.466 { 00:30:04.466 "method": "bdev_raid_set_options", 00:30:04.466 "params": { 00:30:04.466 "process_window_size_kb": 1024 00:30:04.466 } 00:30:04.466 }, 00:30:04.466 { 00:30:04.466 "method": "bdev_iscsi_set_options", 00:30:04.466 "params": { 00:30:04.466 "timeout_sec": 30 00:30:04.466 } 00:30:04.466 }, 00:30:04.466 { 00:30:04.466 "method": "bdev_nvme_set_options", 00:30:04.466 "params": { 00:30:04.466 "action_on_timeout": "none", 00:30:04.466 "timeout_us": 0, 00:30:04.466 "timeout_admin_us": 0, 00:30:04.466 "keep_alive_timeout_ms": 10000, 00:30:04.466 "arbitration_burst": 0, 00:30:04.466 "low_priority_weight": 0, 00:30:04.466 "medium_priority_weight": 0, 00:30:04.466 "high_priority_weight": 0, 00:30:04.466 "nvme_adminq_poll_period_us": 10000, 00:30:04.466 "nvme_ioq_poll_period_us": 0, 00:30:04.466 "io_queue_requests": 0, 00:30:04.466 "delay_cmd_submit": true, 00:30:04.466 "transport_retry_count": 4, 00:30:04.466 "bdev_retry_count": 3, 00:30:04.466 "transport_ack_timeout": 0, 00:30:04.466 "ctrlr_loss_timeout_sec": 0, 00:30:04.466 "reconnect_delay_sec": 0, 00:30:04.466 "fast_io_fail_timeout_sec": 0, 00:30:04.466 "disable_auto_failback": false, 00:30:04.466 "generate_uuids": false, 00:30:04.466 "transport_tos": 0, 00:30:04.466 "nvme_error_stat": false, 00:30:04.466 "rdma_srq_size": 0, 00:30:04.466 "io_path_stat": false, 00:30:04.466 "allow_accel_sequence": false, 00:30:04.466 "rdma_max_cq_size": 0, 00:30:04.466 "rdma_cm_event_timeout_ms": 0, 00:30:04.466 
"dhchap_digests": [ 00:30:04.466 "sha256", 00:30:04.466 "sha384", 00:30:04.466 "sha512" 00:30:04.466 ], 00:30:04.466 "dhchap_dhgroups": [ 00:30:04.466 "null", 00:30:04.466 "ffdhe2048", 00:30:04.466 "ffdhe3072", 00:30:04.466 "ffdhe4096", 00:30:04.466 "ffdhe6144", 00:30:04.466 "ffdhe8192" 00:30:04.466 ] 00:30:04.466 } 00:30:04.466 }, 00:30:04.466 { 00:30:04.466 "method": "bdev_nvme_set_hotplug", 00:30:04.466 "params": { 00:30:04.466 "period_us": 100000, 00:30:04.466 "enable": false 00:30:04.466 } 00:30:04.466 }, 00:30:04.466 { 00:30:04.466 "method": "bdev_malloc_create", 00:30:04.466 "params": { 00:30:04.466 "name": "malloc0", 00:30:04.466 "num_blocks": 8192, 00:30:04.466 "block_size": 4096, 00:30:04.466 "physical_block_size": 4096, 00:30:04.466 "uuid": "dd64e70b-5240-4db3-882c-0f4632b7f7c8", 00:30:04.466 "optimal_io_boundary": 0 00:30:04.466 } 00:30:04.466 }, 00:30:04.466 { 00:30:04.466 "method": "bdev_wait_for_examine" 00:30:04.466 } 00:30:04.466 ] 00:30:04.466 }, 00:30:04.466 { 00:30:04.466 "subsystem": "nbd", 00:30:04.466 "config": [] 00:30:04.466 }, 00:30:04.466 { 00:30:04.466 "subsystem": "scheduler", 00:30:04.466 "config": [ 00:30:04.466 { 00:30:04.466 "method": "framework_set_scheduler", 00:30:04.466 "params": { 00:30:04.466 "name": "static" 00:30:04.466 } 00:30:04.466 } 00:30:04.466 ] 00:30:04.466 }, 00:30:04.466 { 00:30:04.466 "subsystem": "nvmf", 00:30:04.466 "config": [ 00:30:04.466 { 00:30:04.466 "method": "nvmf_set_config", 00:30:04.466 "params": { 00:30:04.466 "discovery_filter": "match_any", 00:30:04.466 "admin_cmd_passthru": { 00:30:04.466 "identify_ctrlr": false 00:30:04.466 } 00:30:04.466 } 00:30:04.466 }, 00:30:04.466 { 00:30:04.466 "method": "nvmf_set_max_subsystems", 00:30:04.466 "params": { 00:30:04.466 "max_subsystems": 1024 00:30:04.466 } 00:30:04.466 }, 00:30:04.466 { 00:30:04.466 "method": "nvmf_set_crdt", 00:30:04.466 "params": { 00:30:04.466 "crdt1": 0, 00:30:04.466 "crdt2": 0, 00:30:04.466 "crdt3": 0 00:30:04.466 } 00:30:04.466 }, 
00:30:04.466 { 00:30:04.466 "method": "nvmf_create_transport", 00:30:04.466 "params": { 00:30:04.466 "trtype": "TCP", 00:30:04.466 "max_queue_depth": 128, 00:30:04.466 "max_io_qpairs_per_ctrlr": 127, 00:30:04.466 "in_capsule_data_size": 4096, 00:30:04.466 "max_io_size": 131072, 00:30:04.466 "io_unit_size": 131072, 00:30:04.466 "max_aq_depth": 128, 00:30:04.466 "num_shared_buffers": 511, 00:30:04.466 "buf_cache_size": 4294967295, 00:30:04.466 "dif_insert_or_strip": false, 00:30:04.466 "zcopy": false, 00:30:04.466 "c2h_success": false, 00:30:04.466 "sock_priority": 0, 00:30:04.466 "abort_timeout_sec": 1, 00:30:04.466 "ack_timeout": 0, 00:30:04.466 "data_wr_pool_size": 0 00:30:04.466 } 00:30:04.466 }, 00:30:04.466 { 00:30:04.467 "method": "nvmf_create_subsystem", 00:30:04.467 "params": { 00:30:04.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:04.467 "allow_any_host": false, 00:30:04.467 "serial_number": "SPDK00000000000001", 00:30:04.467 "model_number": "SPDK bdev Controller", 00:30:04.467 "max_namespaces": 10, 00:30:04.467 "min_cntlid": 1, 00:30:04.467 "max_cntlid": 65519, 00:30:04.467 "ana_reporting": false 00:30:04.467 } 00:30:04.467 }, 00:30:04.467 { 00:30:04.467 "method": "nvmf_subsystem_add_host", 00:30:04.467 "params": { 00:30:04.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:04.467 "host": "nqn.2016-06.io.spdk:host1", 00:30:04.467 "psk": "/tmp/tmp.p0GMWYk5Zt" 00:30:04.467 } 00:30:04.467 }, 00:30:04.467 { 00:30:04.467 "method": "nvmf_subsystem_add_ns", 00:30:04.467 "params": { 00:30:04.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:04.467 "namespace": { 00:30:04.467 "nsid": 1, 00:30:04.467 "bdev_name": "malloc0", 00:30:04.467 "nguid": "DD64E70B52404DB3882C0F4632B7F7C8", 00:30:04.467 "uuid": "dd64e70b-5240-4db3-882c-0f4632b7f7c8", 00:30:04.467 "no_auto_visible": false 00:30:04.467 } 00:30:04.467 } 00:30:04.467 }, 00:30:04.467 { 00:30:04.467 "method": "nvmf_subsystem_add_listener", 00:30:04.467 "params": { 00:30:04.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:30:04.467 "listen_address": { 00:30:04.467 "trtype": "TCP", 00:30:04.467 "adrfam": "IPv4", 00:30:04.467 "traddr": "10.0.0.2", 00:30:04.467 "trsvcid": "4420" 00:30:04.467 }, 00:30:04.467 "secure_channel": true 00:30:04.467 } 00:30:04.467 } 00:30:04.467 ] 00:30:04.467 } 00:30:04.467 ] 00:30:04.467 }' 00:30:04.467 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:04.467 02:35:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1889413 00:30:04.467 02:35:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:30:04.467 02:35:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1889413 00:30:04.467 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1889413 ']' 00:30:04.467 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:04.467 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:04.467 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:04.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:04.467 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:04.467 02:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:04.467 [2024-07-11 02:35:54.823790] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:30:04.467 [2024-07-11 02:35:54.823898] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:04.467 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.723 [2024-07-11 02:35:54.889789] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.723 [2024-07-11 02:35:54.979522] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:04.723 [2024-07-11 02:35:54.979594] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:04.723 [2024-07-11 02:35:54.979611] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:04.723 [2024-07-11 02:35:54.979624] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:04.723 [2024-07-11 02:35:54.979641] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:04.723 [2024-07-11 02:35:54.979730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.981 [2024-07-11 02:35:55.197825] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:04.981 [2024-07-11 02:35:55.213754] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:04.981 [2024-07-11 02:35:55.229824] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:04.981 [2024-07-11 02:35:55.245678] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.547 02:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:05.547 02:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:30:05.547 02:35:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:05.547 02:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:05.547 02:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:05.547 02:35:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.547 02:35:55 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1889530 00:30:05.547 02:35:55 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1889530 /var/tmp/bdevperf.sock 00:30:05.547 02:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1889530 ']' 00:30:05.547 02:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:05.547 02:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:05.547 02:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:30:05.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:05.547 02:35:55 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:30:05.547 02:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:05.547 02:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:05.547 02:35:55 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:30:05.547 "subsystems": [ 00:30:05.547 { 00:30:05.547 "subsystem": "keyring", 00:30:05.547 "config": [] 00:30:05.547 }, 00:30:05.547 { 00:30:05.547 "subsystem": "iobuf", 00:30:05.547 "config": [ 00:30:05.547 { 00:30:05.547 "method": "iobuf_set_options", 00:30:05.547 "params": { 00:30:05.547 "small_pool_count": 8192, 00:30:05.547 "large_pool_count": 1024, 00:30:05.547 "small_bufsize": 8192, 00:30:05.547 "large_bufsize": 135168 00:30:05.547 } 00:30:05.547 } 00:30:05.547 ] 00:30:05.547 }, 00:30:05.547 { 00:30:05.547 "subsystem": "sock", 00:30:05.547 "config": [ 00:30:05.547 { 00:30:05.547 "method": "sock_set_default_impl", 00:30:05.547 "params": { 00:30:05.547 "impl_name": "posix" 00:30:05.547 } 00:30:05.547 }, 00:30:05.547 { 00:30:05.547 "method": "sock_impl_set_options", 00:30:05.547 "params": { 00:30:05.547 "impl_name": "ssl", 00:30:05.547 "recv_buf_size": 4096, 00:30:05.547 "send_buf_size": 4096, 00:30:05.547 "enable_recv_pipe": true, 00:30:05.547 "enable_quickack": false, 00:30:05.547 "enable_placement_id": 0, 00:30:05.547 "enable_zerocopy_send_server": true, 00:30:05.547 "enable_zerocopy_send_client": false, 00:30:05.547 "zerocopy_threshold": 0, 00:30:05.547 "tls_version": 0, 00:30:05.547 "enable_ktls": false 00:30:05.547 } 00:30:05.547 }, 00:30:05.547 { 00:30:05.547 "method": "sock_impl_set_options", 00:30:05.547 "params": { 00:30:05.547 "impl_name": "posix", 00:30:05.547 "recv_buf_size": 
2097152, 00:30:05.547 "send_buf_size": 2097152, 00:30:05.547 "enable_recv_pipe": true, 00:30:05.548 "enable_quickack": false, 00:30:05.548 "enable_placement_id": 0, 00:30:05.548 "enable_zerocopy_send_server": true, 00:30:05.548 "enable_zerocopy_send_client": false, 00:30:05.548 "zerocopy_threshold": 0, 00:30:05.548 "tls_version": 0, 00:30:05.548 "enable_ktls": false 00:30:05.548 } 00:30:05.548 } 00:30:05.548 ] 00:30:05.548 }, 00:30:05.548 { 00:30:05.548 "subsystem": "vmd", 00:30:05.548 "config": [] 00:30:05.548 }, 00:30:05.548 { 00:30:05.548 "subsystem": "accel", 00:30:05.548 "config": [ 00:30:05.548 { 00:30:05.548 "method": "accel_set_options", 00:30:05.548 "params": { 00:30:05.548 "small_cache_size": 128, 00:30:05.548 "large_cache_size": 16, 00:30:05.548 "task_count": 2048, 00:30:05.548 "sequence_count": 2048, 00:30:05.548 "buf_count": 2048 00:30:05.548 } 00:30:05.548 } 00:30:05.548 ] 00:30:05.548 }, 00:30:05.548 { 00:30:05.548 "subsystem": "bdev", 00:30:05.548 "config": [ 00:30:05.548 { 00:30:05.548 "method": "bdev_set_options", 00:30:05.548 "params": { 00:30:05.548 "bdev_io_pool_size": 65535, 00:30:05.548 "bdev_io_cache_size": 256, 00:30:05.548 "bdev_auto_examine": true, 00:30:05.548 "iobuf_small_cache_size": 128, 00:30:05.548 "iobuf_large_cache_size": 16 00:30:05.548 } 00:30:05.548 }, 00:30:05.548 { 00:30:05.548 "method": "bdev_raid_set_options", 00:30:05.548 "params": { 00:30:05.548 "process_window_size_kb": 1024 00:30:05.548 } 00:30:05.548 }, 00:30:05.548 { 00:30:05.548 "method": "bdev_iscsi_set_options", 00:30:05.548 "params": { 00:30:05.548 "timeout_sec": 30 00:30:05.548 } 00:30:05.548 }, 00:30:05.548 { 00:30:05.548 "method": "bdev_nvme_set_options", 00:30:05.548 "params": { 00:30:05.548 "action_on_timeout": "none", 00:30:05.548 "timeout_us": 0, 00:30:05.548 "timeout_admin_us": 0, 00:30:05.548 "keep_alive_timeout_ms": 10000, 00:30:05.548 "arbitration_burst": 0, 00:30:05.548 "low_priority_weight": 0, 00:30:05.548 "medium_priority_weight": 0, 00:30:05.548 
"high_priority_weight": 0, 00:30:05.548 "nvme_adminq_poll_period_us": 10000, 00:30:05.548 "nvme_ioq_poll_period_us": 0, 00:30:05.548 "io_queue_requests": 512, 00:30:05.548 "delay_cmd_submit": true, 00:30:05.548 "transport_retry_count": 4, 00:30:05.548 "bdev_retry_count": 3, 00:30:05.548 "transport_ack_timeout": 0, 00:30:05.548 "ctrlr_loss_timeout_sec": 0, 00:30:05.548 "reconnect_delay_sec": 0, 00:30:05.548 "fast_io_fail_timeout_sec": 0, 00:30:05.548 "disable_auto_failback": false, 00:30:05.548 "generate_uuids": false, 00:30:05.548 "transport_tos": 0, 00:30:05.548 "nvme_error_stat": false, 00:30:05.548 "rdma_srq_size": 0, 00:30:05.548 "io_path_stat": false, 00:30:05.548 "allow_accel_sequence": false, 00:30:05.548 "rdma_max_cq_size": 0, 00:30:05.548 "rdma_cm_event_timeout_ms": 0, 00:30:05.548 "dhchap_digests": [ 00:30:05.548 "sha256", 00:30:05.548 "sha384", 00:30:05.548 "sha512" 00:30:05.548 ], 00:30:05.548 "dhchap_dhgroups": [ 00:30:05.548 "null", 00:30:05.548 "ffdhe2048", 00:30:05.548 "ffdhe3072", 00:30:05.548 "ffdhe4096", 00:30:05.548 "ffdhe6144", 00:30:05.548 "ffdhe8192" 00:30:05.548 ] 00:30:05.548 } 00:30:05.548 }, 00:30:05.548 { 00:30:05.548 "method": "bdev_nvme_attach_controller", 00:30:05.548 "params": { 00:30:05.548 "name": "TLSTEST", 00:30:05.548 "trtype": "TCP", 00:30:05.548 "adrfam": "IPv4", 00:30:05.548 "traddr": "10.0.0.2", 00:30:05.548 "trsvcid": "4420", 00:30:05.548 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:05.548 "prchk_reftag": false, 00:30:05.548 "prchk_guard": false, 00:30:05.548 "ctrlr_loss_timeout_sec": 0, 00:30:05.548 "reconnect_delay_sec": 0, 00:30:05.548 "fast_io_fail_timeout_sec": 0, 00:30:05.548 "psk": "/tmp/tmp.p0GMWYk5Zt", 00:30:05.548 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:05.548 "hdgst": false, 00:30:05.548 "ddgst": false 00:30:05.548 } 00:30:05.548 }, 00:30:05.548 { 00:30:05.548 "method": "bdev_nvme_set_hotplug", 00:30:05.548 "params": { 00:30:05.548 "period_us": 100000, 00:30:05.548 "enable": false 00:30:05.548 } 
00:30:05.548 }, 00:30:05.548 { 00:30:05.548 "method": "bdev_wait_for_examine" 00:30:05.548 } 00:30:05.548 ] 00:30:05.548 }, 00:30:05.548 { 00:30:05.548 "subsystem": "nbd", 00:30:05.548 "config": [] 00:30:05.548 } 00:30:05.548 ] 00:30:05.548 }' 00:30:05.548 [2024-07-11 02:35:55.942375] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:30:05.548 [2024-07-11 02:35:55.942477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1889530 ] 00:30:05.807 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.807 [2024-07-11 02:35:56.002883] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.807 [2024-07-11 02:35:56.094147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:06.065 [2024-07-11 02:35:56.252436] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:06.065 [2024-07-11 02:35:56.252582] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:30:06.065 02:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:06.065 02:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:30:06.065 02:35:56 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:30:06.065 Running I/O for 10 seconds... 
00:30:18.283 00:30:18.283 Latency(us) 00:30:18.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:18.283 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:30:18.283 Verification LBA range: start 0x0 length 0x2000 00:30:18.283 TLSTESTn1 : 10.02 3141.15 12.27 0.00 0.00 40670.83 7912.87 48156.82 00:30:18.283 =================================================================================================================== 00:30:18.283 Total : 3141.15 12.27 0.00 0.00 40670.83 7912.87 48156.82 00:30:18.283 0 00:30:18.283 02:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:18.283 02:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1889530 00:30:18.283 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1889530 ']' 00:30:18.283 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1889530 00:30:18.283 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:30:18.283 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:18.283 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1889530 00:30:18.283 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:30:18.283 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:30:18.283 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1889530' 00:30:18.283 killing process with pid 1889530 00:30:18.283 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1889530 00:30:18.283 Received shutdown signal, test time was about 10.000000 seconds 00:30:18.283 00:30:18.283 Latency(us) 00:30:18.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:18.284 
=================================================================================================================== 00:30:18.284 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:18.284 [2024-07-11 02:36:06.564710] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:30:18.284 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1889530 00:30:18.284 02:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1889413 00:30:18.284 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1889413 ']' 00:30:18.284 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1889413 00:30:18.284 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:30:18.284 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:18.284 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1889413 00:30:18.284 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:18.284 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:18.284 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1889413' 00:30:18.284 killing process with pid 1889413 00:30:18.284 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1889413 00:30:18.284 [2024-07-11 02:36:06.749101] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:18.284 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1889413 00:30:18.284 02:36:06 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:30:18.284 02:36:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:18.284 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:30:18.284 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:18.284 02:36:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1890642 00:30:18.284 02:36:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:18.284 02:36:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1890642 00:30:18.284 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1890642 ']' 00:30:18.284 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.284 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:18.284 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:18.284 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:18.284 02:36:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:18.284 [2024-07-11 02:36:06.984720] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:30:18.284 [2024-07-11 02:36:06.984820] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:18.284 EAL: No free 2048 kB hugepages reported on node 1 00:30:18.284 [2024-07-11 02:36:07.051216] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.284 [2024-07-11 02:36:07.140436] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:18.284 [2024-07-11 02:36:07.140501] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:18.284 [2024-07-11 02:36:07.140525] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:18.284 [2024-07-11 02:36:07.140539] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:18.284 [2024-07-11 02:36:07.140552] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:18.284 [2024-07-11 02:36:07.140591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.284 02:36:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:18.284 02:36:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:30:18.284 02:36:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:18.284 02:36:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:18.284 02:36:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:18.284 02:36:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:18.284 02:36:07 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.p0GMWYk5Zt 00:30:18.284 02:36:07 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.p0GMWYk5Zt 00:30:18.284 02:36:07 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:18.284 [2024-07-11 02:36:07.545278] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:18.284 02:36:07 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:30:18.284 02:36:07 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:30:18.284 [2024-07-11 02:36:08.138845] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:18.284 [2024-07-11 02:36:08.139082] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:18.284 02:36:08 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:30:18.284 malloc0 00:30:18.284 02:36:08 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:18.571 02:36:08 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p0GMWYk5Zt 00:30:18.831 [2024-07-11 02:36:09.059402] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:18.832 02:36:09 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1891139 00:30:18.832 02:36:09 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:18.832 02:36:09 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1891139 /var/tmp/bdevperf.sock 00:30:18.832 02:36:09 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:30:18.832 02:36:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1891139 ']' 00:30:18.832 02:36:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:18.832 02:36:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:30:18.832 02:36:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:18.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:18.832 02:36:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:18.832 02:36:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:18.832 [2024-07-11 02:36:09.127541] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:30:18.832 [2024-07-11 02:36:09.127631] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1891139 ] 00:30:18.832 EAL: No free 2048 kB hugepages reported on node 1 00:30:18.832 [2024-07-11 02:36:09.188026] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.089 [2024-07-11 02:36:09.279307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.089 02:36:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:19.089 02:36:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:30:19.089 02:36:09 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.p0GMWYk5Zt 00:30:19.347 02:36:09 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:30:19.605 [2024-07-11 02:36:09.954346] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:19.863 
nvme0n1 00:30:19.863 02:36:10 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:19.863 Running I/O for 1 seconds... 00:30:20.797 00:30:20.797 Latency(us) 00:30:20.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.797 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:20.797 Verification LBA range: start 0x0 length 0x2000 00:30:20.797 nvme0n1 : 1.03 3204.86 12.52 0.00 0.00 39469.62 7330.32 40583.77 00:30:20.797 =================================================================================================================== 00:30:20.797 Total : 3204.86 12.52 0.00 0.00 39469.62 7330.32 40583.77 00:30:20.797 0 00:30:20.797 02:36:11 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1891139 00:30:20.797 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1891139 ']' 00:30:20.797 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1891139 00:30:20.797 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:30:20.797 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:20.797 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1891139 00:30:21.056 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:21.056 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:21.056 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1891139' 00:30:21.056 killing process with pid 1891139 00:30:21.056 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1891139 00:30:21.056 Received shutdown signal, test time was about 1.000000 seconds 00:30:21.056 00:30:21.056 Latency(us) 00:30:21.056 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:30:21.056 =================================================================================================================== 00:30:21.056 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:21.056 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1891139 00:30:21.056 02:36:11 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1890642 00:30:21.056 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1890642 ']' 00:30:21.056 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1890642 00:30:21.056 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:30:21.056 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:21.056 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1890642 00:30:21.056 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:21.056 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:21.056 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1890642' 00:30:21.056 killing process with pid 1890642 00:30:21.056 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1890642 00:30:21.056 [2024-07-11 02:36:11.430210] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:21.056 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1890642 00:30:21.316 02:36:11 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:30:21.316 02:36:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:21.316 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:21.316 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:21.316 02:36:11 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=1891601 00:30:21.316 02:36:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:21.316 02:36:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1891601 00:30:21.316 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1891601 ']' 00:30:21.316 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.316 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:21.316 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:21.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:21.316 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:21.316 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:21.316 [2024-07-11 02:36:11.662759] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:30:21.316 [2024-07-11 02:36:11.662865] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:21.316 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.316 [2024-07-11 02:36:11.728237] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.574 [2024-07-11 02:36:11.816913] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:21.574 [2024-07-11 02:36:11.816974] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:21.574 [2024-07-11 02:36:11.816990] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:21.574 [2024-07-11 02:36:11.817004] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:21.574 [2024-07-11 02:36:11.817016] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:21.574 [2024-07-11 02:36:11.817047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.574 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:21.574 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:30:21.574 02:36:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:21.574 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:21.574 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:21.574 02:36:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:21.574 02:36:11 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:30:21.574 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:21.574 02:36:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:21.574 [2024-07-11 02:36:11.945860] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:21.574 malloc0 00:30:21.574 [2024-07-11 02:36:11.976383] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:21.574 [2024-07-11 02:36:11.976637] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:21.833 02:36:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:21.833 02:36:12 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1891704 00:30:21.833 02:36:12 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 
1891704 /var/tmp/bdevperf.sock 00:30:21.833 02:36:12 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:30:21.833 02:36:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1891704 ']' 00:30:21.833 02:36:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:21.833 02:36:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:21.833 02:36:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:21.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:21.833 02:36:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:21.833 02:36:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:21.833 [2024-07-11 02:36:12.051362] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:30:21.833 [2024-07-11 02:36:12.051453] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1891704 ] 00:30:21.833 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.833 [2024-07-11 02:36:12.112033] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.833 [2024-07-11 02:36:12.199614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.091 02:36:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:22.091 02:36:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:30:22.091 02:36:12 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.p0GMWYk5Zt 00:30:22.349 02:36:12 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:30:22.607 [2024-07-11 02:36:12.879274] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:22.607 nvme0n1 00:30:22.607 02:36:12 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:22.865 Running I/O for 1 seconds... 
00:30:23.797 00:30:23.797 Latency(us) 00:30:23.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:23.797 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:23.797 Verification LBA range: start 0x0 length 0x2000 00:30:23.797 nvme0n1 : 1.02 3317.56 12.96 0.00 0.00 38164.70 7233.23 32428.18 00:30:23.797 =================================================================================================================== 00:30:23.797 Total : 3317.56 12.96 0.00 0.00 38164.70 7233.23 32428.18 00:30:23.797 0 00:30:23.797 02:36:14 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:30:23.797 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.797 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:24.055 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.055 02:36:14 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:30:24.055 "subsystems": [ 00:30:24.055 { 00:30:24.055 "subsystem": "keyring", 00:30:24.055 "config": [ 00:30:24.055 { 00:30:24.055 "method": "keyring_file_add_key", 00:30:24.055 "params": { 00:30:24.055 "name": "key0", 00:30:24.055 "path": "/tmp/tmp.p0GMWYk5Zt" 00:30:24.055 } 00:30:24.055 } 00:30:24.055 ] 00:30:24.055 }, 00:30:24.055 { 00:30:24.055 "subsystem": "iobuf", 00:30:24.055 "config": [ 00:30:24.055 { 00:30:24.055 "method": "iobuf_set_options", 00:30:24.055 "params": { 00:30:24.055 "small_pool_count": 8192, 00:30:24.055 "large_pool_count": 1024, 00:30:24.055 "small_bufsize": 8192, 00:30:24.055 "large_bufsize": 135168 00:30:24.055 } 00:30:24.055 } 00:30:24.055 ] 00:30:24.055 }, 00:30:24.055 { 00:30:24.055 "subsystem": "sock", 00:30:24.055 "config": [ 00:30:24.055 { 00:30:24.055 "method": "sock_set_default_impl", 00:30:24.055 "params": { 00:30:24.055 "impl_name": "posix" 00:30:24.055 } 00:30:24.055 }, 00:30:24.055 { 00:30:24.055 "method": "sock_impl_set_options", 00:30:24.055 
"params": { 00:30:24.055 "impl_name": "ssl", 00:30:24.055 "recv_buf_size": 4096, 00:30:24.055 "send_buf_size": 4096, 00:30:24.055 "enable_recv_pipe": true, 00:30:24.055 "enable_quickack": false, 00:30:24.055 "enable_placement_id": 0, 00:30:24.055 "enable_zerocopy_send_server": true, 00:30:24.055 "enable_zerocopy_send_client": false, 00:30:24.055 "zerocopy_threshold": 0, 00:30:24.055 "tls_version": 0, 00:30:24.055 "enable_ktls": false 00:30:24.055 } 00:30:24.055 }, 00:30:24.055 { 00:30:24.055 "method": "sock_impl_set_options", 00:30:24.055 "params": { 00:30:24.055 "impl_name": "posix", 00:30:24.055 "recv_buf_size": 2097152, 00:30:24.055 "send_buf_size": 2097152, 00:30:24.055 "enable_recv_pipe": true, 00:30:24.055 "enable_quickack": false, 00:30:24.055 "enable_placement_id": 0, 00:30:24.055 "enable_zerocopy_send_server": true, 00:30:24.055 "enable_zerocopy_send_client": false, 00:30:24.055 "zerocopy_threshold": 0, 00:30:24.055 "tls_version": 0, 00:30:24.055 "enable_ktls": false 00:30:24.055 } 00:30:24.055 } 00:30:24.055 ] 00:30:24.055 }, 00:30:24.055 { 00:30:24.055 "subsystem": "vmd", 00:30:24.055 "config": [] 00:30:24.055 }, 00:30:24.055 { 00:30:24.055 "subsystem": "accel", 00:30:24.055 "config": [ 00:30:24.055 { 00:30:24.055 "method": "accel_set_options", 00:30:24.055 "params": { 00:30:24.055 "small_cache_size": 128, 00:30:24.055 "large_cache_size": 16, 00:30:24.055 "task_count": 2048, 00:30:24.055 "sequence_count": 2048, 00:30:24.055 "buf_count": 2048 00:30:24.055 } 00:30:24.055 } 00:30:24.055 ] 00:30:24.055 }, 00:30:24.055 { 00:30:24.055 "subsystem": "bdev", 00:30:24.055 "config": [ 00:30:24.055 { 00:30:24.055 "method": "bdev_set_options", 00:30:24.055 "params": { 00:30:24.055 "bdev_io_pool_size": 65535, 00:30:24.055 "bdev_io_cache_size": 256, 00:30:24.055 "bdev_auto_examine": true, 00:30:24.055 "iobuf_small_cache_size": 128, 00:30:24.055 "iobuf_large_cache_size": 16 00:30:24.055 } 00:30:24.055 }, 00:30:24.055 { 00:30:24.055 "method": "bdev_raid_set_options", 
00:30:24.055 "params": { 00:30:24.055 "process_window_size_kb": 1024 00:30:24.055 } 00:30:24.055 }, 00:30:24.055 { 00:30:24.055 "method": "bdev_iscsi_set_options", 00:30:24.055 "params": { 00:30:24.055 "timeout_sec": 30 00:30:24.055 } 00:30:24.055 }, 00:30:24.055 { 00:30:24.055 "method": "bdev_nvme_set_options", 00:30:24.055 "params": { 00:30:24.055 "action_on_timeout": "none", 00:30:24.055 "timeout_us": 0, 00:30:24.055 "timeout_admin_us": 0, 00:30:24.055 "keep_alive_timeout_ms": 10000, 00:30:24.055 "arbitration_burst": 0, 00:30:24.055 "low_priority_weight": 0, 00:30:24.055 "medium_priority_weight": 0, 00:30:24.055 "high_priority_weight": 0, 00:30:24.055 "nvme_adminq_poll_period_us": 10000, 00:30:24.055 "nvme_ioq_poll_period_us": 0, 00:30:24.055 "io_queue_requests": 0, 00:30:24.055 "delay_cmd_submit": true, 00:30:24.055 "transport_retry_count": 4, 00:30:24.055 "bdev_retry_count": 3, 00:30:24.055 "transport_ack_timeout": 0, 00:30:24.055 "ctrlr_loss_timeout_sec": 0, 00:30:24.055 "reconnect_delay_sec": 0, 00:30:24.055 "fast_io_fail_timeout_sec": 0, 00:30:24.055 "disable_auto_failback": false, 00:30:24.055 "generate_uuids": false, 00:30:24.055 "transport_tos": 0, 00:30:24.055 "nvme_error_stat": false, 00:30:24.055 "rdma_srq_size": 0, 00:30:24.055 "io_path_stat": false, 00:30:24.055 "allow_accel_sequence": false, 00:30:24.055 "rdma_max_cq_size": 0, 00:30:24.055 "rdma_cm_event_timeout_ms": 0, 00:30:24.055 "dhchap_digests": [ 00:30:24.055 "sha256", 00:30:24.055 "sha384", 00:30:24.055 "sha512" 00:30:24.055 ], 00:30:24.055 "dhchap_dhgroups": [ 00:30:24.055 "null", 00:30:24.055 "ffdhe2048", 00:30:24.055 "ffdhe3072", 00:30:24.055 "ffdhe4096", 00:30:24.055 "ffdhe6144", 00:30:24.055 "ffdhe8192" 00:30:24.055 ] 00:30:24.055 } 00:30:24.055 }, 00:30:24.055 { 00:30:24.055 "method": "bdev_nvme_set_hotplug", 00:30:24.055 "params": { 00:30:24.055 "period_us": 100000, 00:30:24.055 "enable": false 00:30:24.055 } 00:30:24.055 }, 00:30:24.055 { 00:30:24.055 "method": "bdev_malloc_create", 
00:30:24.055 "params": { 00:30:24.055 "name": "malloc0", 00:30:24.055 "num_blocks": 8192, 00:30:24.055 "block_size": 4096, 00:30:24.055 "physical_block_size": 4096, 00:30:24.055 "uuid": "e7c26b6f-dbc2-4248-afe8-04ff7a9f2bdc", 00:30:24.055 "optimal_io_boundary": 0 00:30:24.056 } 00:30:24.056 }, 00:30:24.056 { 00:30:24.056 "method": "bdev_wait_for_examine" 00:30:24.056 } 00:30:24.056 ] 00:30:24.056 }, 00:30:24.056 { 00:30:24.056 "subsystem": "nbd", 00:30:24.056 "config": [] 00:30:24.056 }, 00:30:24.056 { 00:30:24.056 "subsystem": "scheduler", 00:30:24.056 "config": [ 00:30:24.056 { 00:30:24.056 "method": "framework_set_scheduler", 00:30:24.056 "params": { 00:30:24.056 "name": "static" 00:30:24.056 } 00:30:24.056 } 00:30:24.056 ] 00:30:24.056 }, 00:30:24.056 { 00:30:24.056 "subsystem": "nvmf", 00:30:24.056 "config": [ 00:30:24.056 { 00:30:24.056 "method": "nvmf_set_config", 00:30:24.056 "params": { 00:30:24.056 "discovery_filter": "match_any", 00:30:24.056 "admin_cmd_passthru": { 00:30:24.056 "identify_ctrlr": false 00:30:24.056 } 00:30:24.056 } 00:30:24.056 }, 00:30:24.056 { 00:30:24.056 "method": "nvmf_set_max_subsystems", 00:30:24.056 "params": { 00:30:24.056 "max_subsystems": 1024 00:30:24.056 } 00:30:24.056 }, 00:30:24.056 { 00:30:24.056 "method": "nvmf_set_crdt", 00:30:24.056 "params": { 00:30:24.056 "crdt1": 0, 00:30:24.056 "crdt2": 0, 00:30:24.056 "crdt3": 0 00:30:24.056 } 00:30:24.056 }, 00:30:24.056 { 00:30:24.056 "method": "nvmf_create_transport", 00:30:24.056 "params": { 00:30:24.056 "trtype": "TCP", 00:30:24.056 "max_queue_depth": 128, 00:30:24.056 "max_io_qpairs_per_ctrlr": 127, 00:30:24.056 "in_capsule_data_size": 4096, 00:30:24.056 "max_io_size": 131072, 00:30:24.056 "io_unit_size": 131072, 00:30:24.056 "max_aq_depth": 128, 00:30:24.056 "num_shared_buffers": 511, 00:30:24.056 "buf_cache_size": 4294967295, 00:30:24.056 "dif_insert_or_strip": false, 00:30:24.056 "zcopy": false, 00:30:24.056 "c2h_success": false, 00:30:24.056 "sock_priority": 0, 
00:30:24.056 "abort_timeout_sec": 1, 00:30:24.056 "ack_timeout": 0, 00:30:24.056 "data_wr_pool_size": 0 00:30:24.056 } 00:30:24.056 }, 00:30:24.056 { 00:30:24.056 "method": "nvmf_create_subsystem", 00:30:24.056 "params": { 00:30:24.056 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:24.056 "allow_any_host": false, 00:30:24.056 "serial_number": "00000000000000000000", 00:30:24.056 "model_number": "SPDK bdev Controller", 00:30:24.056 "max_namespaces": 32, 00:30:24.056 "min_cntlid": 1, 00:30:24.056 "max_cntlid": 65519, 00:30:24.056 "ana_reporting": false 00:30:24.056 } 00:30:24.056 }, 00:30:24.056 { 00:30:24.056 "method": "nvmf_subsystem_add_host", 00:30:24.056 "params": { 00:30:24.056 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:24.056 "host": "nqn.2016-06.io.spdk:host1", 00:30:24.056 "psk": "key0" 00:30:24.056 } 00:30:24.056 }, 00:30:24.056 { 00:30:24.056 "method": "nvmf_subsystem_add_ns", 00:30:24.056 "params": { 00:30:24.056 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:24.056 "namespace": { 00:30:24.056 "nsid": 1, 00:30:24.056 "bdev_name": "malloc0", 00:30:24.056 "nguid": "E7C26B6FDBC24248AFE804FF7A9F2BDC", 00:30:24.056 "uuid": "e7c26b6f-dbc2-4248-afe8-04ff7a9f2bdc", 00:30:24.056 "no_auto_visible": false 00:30:24.056 } 00:30:24.056 } 00:30:24.056 }, 00:30:24.056 { 00:30:24.056 "method": "nvmf_subsystem_add_listener", 00:30:24.056 "params": { 00:30:24.056 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:24.056 "listen_address": { 00:30:24.056 "trtype": "TCP", 00:30:24.056 "adrfam": "IPv4", 00:30:24.056 "traddr": "10.0.0.2", 00:30:24.056 "trsvcid": "4420" 00:30:24.056 }, 00:30:24.056 "secure_channel": true 00:30:24.056 } 00:30:24.056 } 00:30:24.056 ] 00:30:24.056 } 00:30:24.056 ] 00:30:24.056 }' 00:30:24.056 02:36:14 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:30:24.315 02:36:14 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:30:24.315 "subsystems": [ 00:30:24.315 { 
00:30:24.315 "subsystem": "keyring", 00:30:24.315 "config": [ 00:30:24.315 { 00:30:24.315 "method": "keyring_file_add_key", 00:30:24.315 "params": { 00:30:24.315 "name": "key0", 00:30:24.315 "path": "/tmp/tmp.p0GMWYk5Zt" 00:30:24.315 } 00:30:24.315 } 00:30:24.315 ] 00:30:24.315 }, 00:30:24.315 { 00:30:24.315 "subsystem": "iobuf", 00:30:24.315 "config": [ 00:30:24.315 { 00:30:24.315 "method": "iobuf_set_options", 00:30:24.315 "params": { 00:30:24.315 "small_pool_count": 8192, 00:30:24.315 "large_pool_count": 1024, 00:30:24.315 "small_bufsize": 8192, 00:30:24.315 "large_bufsize": 135168 00:30:24.315 } 00:30:24.315 } 00:30:24.315 ] 00:30:24.315 }, 00:30:24.315 { 00:30:24.315 "subsystem": "sock", 00:30:24.315 "config": [ 00:30:24.315 { 00:30:24.315 "method": "sock_set_default_impl", 00:30:24.315 "params": { 00:30:24.315 "impl_name": "posix" 00:30:24.315 } 00:30:24.315 }, 00:30:24.315 { 00:30:24.315 "method": "sock_impl_set_options", 00:30:24.315 "params": { 00:30:24.315 "impl_name": "ssl", 00:30:24.315 "recv_buf_size": 4096, 00:30:24.315 "send_buf_size": 4096, 00:30:24.315 "enable_recv_pipe": true, 00:30:24.315 "enable_quickack": false, 00:30:24.315 "enable_placement_id": 0, 00:30:24.315 "enable_zerocopy_send_server": true, 00:30:24.315 "enable_zerocopy_send_client": false, 00:30:24.315 "zerocopy_threshold": 0, 00:30:24.315 "tls_version": 0, 00:30:24.315 "enable_ktls": false 00:30:24.315 } 00:30:24.315 }, 00:30:24.315 { 00:30:24.315 "method": "sock_impl_set_options", 00:30:24.315 "params": { 00:30:24.315 "impl_name": "posix", 00:30:24.315 "recv_buf_size": 2097152, 00:30:24.315 "send_buf_size": 2097152, 00:30:24.315 "enable_recv_pipe": true, 00:30:24.315 "enable_quickack": false, 00:30:24.315 "enable_placement_id": 0, 00:30:24.315 "enable_zerocopy_send_server": true, 00:30:24.315 "enable_zerocopy_send_client": false, 00:30:24.315 "zerocopy_threshold": 0, 00:30:24.315 "tls_version": 0, 00:30:24.315 "enable_ktls": false 00:30:24.315 } 00:30:24.315 } 00:30:24.315 ] 
00:30:24.315 }, 00:30:24.315 { 00:30:24.315 "subsystem": "vmd", 00:30:24.315 "config": [] 00:30:24.315 }, 00:30:24.315 { 00:30:24.315 "subsystem": "accel", 00:30:24.315 "config": [ 00:30:24.315 { 00:30:24.315 "method": "accel_set_options", 00:30:24.315 "params": { 00:30:24.315 "small_cache_size": 128, 00:30:24.315 "large_cache_size": 16, 00:30:24.315 "task_count": 2048, 00:30:24.315 "sequence_count": 2048, 00:30:24.315 "buf_count": 2048 00:30:24.315 } 00:30:24.315 } 00:30:24.315 ] 00:30:24.315 }, 00:30:24.315 { 00:30:24.315 "subsystem": "bdev", 00:30:24.315 "config": [ 00:30:24.315 { 00:30:24.315 "method": "bdev_set_options", 00:30:24.315 "params": { 00:30:24.315 "bdev_io_pool_size": 65535, 00:30:24.315 "bdev_io_cache_size": 256, 00:30:24.315 "bdev_auto_examine": true, 00:30:24.315 "iobuf_small_cache_size": 128, 00:30:24.315 "iobuf_large_cache_size": 16 00:30:24.315 } 00:30:24.315 }, 00:30:24.315 { 00:30:24.315 "method": "bdev_raid_set_options", 00:30:24.315 "params": { 00:30:24.315 "process_window_size_kb": 1024 00:30:24.315 } 00:30:24.315 }, 00:30:24.315 { 00:30:24.315 "method": "bdev_iscsi_set_options", 00:30:24.315 "params": { 00:30:24.315 "timeout_sec": 30 00:30:24.315 } 00:30:24.315 }, 00:30:24.315 { 00:30:24.315 "method": "bdev_nvme_set_options", 00:30:24.315 "params": { 00:30:24.315 "action_on_timeout": "none", 00:30:24.315 "timeout_us": 0, 00:30:24.315 "timeout_admin_us": 0, 00:30:24.315 "keep_alive_timeout_ms": 10000, 00:30:24.315 "arbitration_burst": 0, 00:30:24.315 "low_priority_weight": 0, 00:30:24.315 "medium_priority_weight": 0, 00:30:24.315 "high_priority_weight": 0, 00:30:24.315 "nvme_adminq_poll_period_us": 10000, 00:30:24.315 "nvme_ioq_poll_period_us": 0, 00:30:24.315 "io_queue_requests": 512, 00:30:24.315 "delay_cmd_submit": true, 00:30:24.315 "transport_retry_count": 4, 00:30:24.315 "bdev_retry_count": 3, 00:30:24.315 "transport_ack_timeout": 0, 00:30:24.315 "ctrlr_loss_timeout_sec": 0, 00:30:24.315 "reconnect_delay_sec": 0, 00:30:24.315 
"fast_io_fail_timeout_sec": 0, 00:30:24.315 "disable_auto_failback": false, 00:30:24.315 "generate_uuids": false, 00:30:24.315 "transport_tos": 0, 00:30:24.315 "nvme_error_stat": false, 00:30:24.315 "rdma_srq_size": 0, 00:30:24.315 "io_path_stat": false, 00:30:24.315 "allow_accel_sequence": false, 00:30:24.315 "rdma_max_cq_size": 0, 00:30:24.315 "rdma_cm_event_timeout_ms": 0, 00:30:24.315 "dhchap_digests": [ 00:30:24.315 "sha256", 00:30:24.315 "sha384", 00:30:24.315 "sha512" 00:30:24.315 ], 00:30:24.315 "dhchap_dhgroups": [ 00:30:24.315 "null", 00:30:24.315 "ffdhe2048", 00:30:24.315 "ffdhe3072", 00:30:24.315 "ffdhe4096", 00:30:24.315 "ffdhe6144", 00:30:24.315 "ffdhe8192" 00:30:24.315 ] 00:30:24.315 } 00:30:24.315 }, 00:30:24.315 { 00:30:24.315 "method": "bdev_nvme_attach_controller", 00:30:24.315 "params": { 00:30:24.315 "name": "nvme0", 00:30:24.315 "trtype": "TCP", 00:30:24.315 "adrfam": "IPv4", 00:30:24.315 "traddr": "10.0.0.2", 00:30:24.315 "trsvcid": "4420", 00:30:24.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:24.315 "prchk_reftag": false, 00:30:24.315 "prchk_guard": false, 00:30:24.315 "ctrlr_loss_timeout_sec": 0, 00:30:24.315 "reconnect_delay_sec": 0, 00:30:24.315 "fast_io_fail_timeout_sec": 0, 00:30:24.315 "psk": "key0", 00:30:24.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:24.315 "hdgst": false, 00:30:24.315 "ddgst": false 00:30:24.315 } 00:30:24.315 }, 00:30:24.315 { 00:30:24.315 "method": "bdev_nvme_set_hotplug", 00:30:24.315 "params": { 00:30:24.315 "period_us": 100000, 00:30:24.315 "enable": false 00:30:24.315 } 00:30:24.315 }, 00:30:24.315 { 00:30:24.315 "method": "bdev_enable_histogram", 00:30:24.315 "params": { 00:30:24.315 "name": "nvme0n1", 00:30:24.315 "enable": true 00:30:24.315 } 00:30:24.315 }, 00:30:24.315 { 00:30:24.315 "method": "bdev_wait_for_examine" 00:30:24.315 } 00:30:24.315 ] 00:30:24.315 }, 00:30:24.315 { 00:30:24.315 "subsystem": "nbd", 00:30:24.315 "config": [] 00:30:24.315 } 00:30:24.315 ] 00:30:24.315 }' 00:30:24.315 
02:36:14 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1891704 00:30:24.315 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1891704 ']' 00:30:24.315 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1891704 00:30:24.315 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:30:24.315 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:24.315 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1891704 00:30:24.315 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:24.315 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:24.315 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1891704' 00:30:24.315 killing process with pid 1891704 00:30:24.315 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1891704 00:30:24.315 Received shutdown signal, test time was about 1.000000 seconds 00:30:24.315 00:30:24.315 Latency(us) 00:30:24.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:24.315 =================================================================================================================== 00:30:24.315 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:24.315 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1891704 00:30:24.574 02:36:14 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1891601 00:30:24.574 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1891601 ']' 00:30:24.574 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1891601 00:30:24.574 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:30:24.574 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:24.574 02:36:14 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1891601 00:30:24.574 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:24.574 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:24.574 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1891601' 00:30:24.574 killing process with pid 1891601 00:30:24.574 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1891601 00:30:24.574 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1891601 00:30:24.574 02:36:14 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:30:24.574 02:36:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:24.574 02:36:14 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:30:24.574 "subsystems": [ 00:30:24.574 { 00:30:24.574 "subsystem": "keyring", 00:30:24.574 "config": [ 00:30:24.574 { 00:30:24.574 "method": "keyring_file_add_key", 00:30:24.574 "params": { 00:30:24.574 "name": "key0", 00:30:24.574 "path": "/tmp/tmp.p0GMWYk5Zt" 00:30:24.574 } 00:30:24.574 } 00:30:24.574 ] 00:30:24.574 }, 00:30:24.574 { 00:30:24.574 "subsystem": "iobuf", 00:30:24.574 "config": [ 00:30:24.574 { 00:30:24.574 "method": "iobuf_set_options", 00:30:24.574 "params": { 00:30:24.574 "small_pool_count": 8192, 00:30:24.574 "large_pool_count": 1024, 00:30:24.574 "small_bufsize": 8192, 00:30:24.574 "large_bufsize": 135168 00:30:24.574 } 00:30:24.574 } 00:30:24.574 ] 00:30:24.574 }, 00:30:24.574 { 00:30:24.574 "subsystem": "sock", 00:30:24.574 "config": [ 00:30:24.574 { 00:30:24.574 "method": "sock_set_default_impl", 00:30:24.574 "params": { 00:30:24.574 "impl_name": "posix" 00:30:24.574 } 00:30:24.574 }, 00:30:24.574 { 00:30:24.574 "method": "sock_impl_set_options", 00:30:24.574 "params": { 00:30:24.574 "impl_name": "ssl", 00:30:24.574 "recv_buf_size": 4096, 00:30:24.574 "send_buf_size": 4096, 
00:30:24.574 "enable_recv_pipe": true, 00:30:24.574 "enable_quickack": false, 00:30:24.574 "enable_placement_id": 0, 00:30:24.574 "enable_zerocopy_send_server": true, 00:30:24.574 "enable_zerocopy_send_client": false, 00:30:24.574 "zerocopy_threshold": 0, 00:30:24.574 "tls_version": 0, 00:30:24.574 "enable_ktls": false 00:30:24.574 } 00:30:24.574 }, 00:30:24.574 { 00:30:24.575 "method": "sock_impl_set_options", 00:30:24.575 "params": { 00:30:24.575 "impl_name": "posix", 00:30:24.575 "recv_buf_size": 2097152, 00:30:24.575 "send_buf_size": 2097152, 00:30:24.575 "enable_recv_pipe": true, 00:30:24.575 "enable_quickack": false, 00:30:24.575 "enable_placement_id": 0, 00:30:24.575 "enable_zerocopy_send_server": true, 00:30:24.575 "enable_zerocopy_send_client": false, 00:30:24.575 "zerocopy_threshold": 0, 00:30:24.575 "tls_version": 0, 00:30:24.575 "enable_ktls": false 00:30:24.575 } 00:30:24.575 } 00:30:24.575 ] 00:30:24.575 }, 00:30:24.575 { 00:30:24.575 "subsystem": "vmd", 00:30:24.575 "config": [] 00:30:24.575 }, 00:30:24.575 { 00:30:24.575 "subsystem": "accel", 00:30:24.575 "config": [ 00:30:24.575 { 00:30:24.575 "method": "accel_set_options", 00:30:24.575 "params": { 00:30:24.575 "small_cache_size": 128, 00:30:24.575 "large_cache_size": 16, 00:30:24.575 "task_count": 2048, 00:30:24.575 "sequence_count": 2048, 00:30:24.575 "buf_count": 2048 00:30:24.575 } 00:30:24.575 } 00:30:24.575 ] 00:30:24.575 }, 00:30:24.575 { 00:30:24.575 "subsystem": "bdev", 00:30:24.575 "config": [ 00:30:24.575 { 00:30:24.575 "method": "bdev_set_options", 00:30:24.575 "params": { 00:30:24.575 "bdev_io_pool_size": 65535, 00:30:24.575 "bdev_io_cache_size": 256, 00:30:24.575 "bdev_auto_examine": true, 00:30:24.575 "iobuf_small_cache_size": 128, 00:30:24.575 "iobuf_large_cache_size": 16 00:30:24.575 } 00:30:24.575 }, 00:30:24.575 { 00:30:24.575 "method": "bdev_raid_set_options", 00:30:24.575 "params": { 00:30:24.575 "process_window_size_kb": 1024 00:30:24.575 } 00:30:24.575 }, 00:30:24.575 { 
00:30:24.575 "method": "bdev_iscsi_set_options", 00:30:24.575 "params": { 00:30:24.575 "timeout_sec": 30 00:30:24.575 } 00:30:24.575 }, 00:30:24.575 { 00:30:24.575 "method": "bdev_nvme_set_options", 00:30:24.575 "params": { 00:30:24.575 "action_on_timeout": "none", 00:30:24.575 "timeout_us": 0, 00:30:24.575 "timeout_admin_us": 0, 00:30:24.575 "keep_alive_timeout_ms": 10000, 00:30:24.575 "arbitration_burst": 0, 00:30:24.575 "low_priority_weight": 0, 00:30:24.575 "medium_priority_weight": 0, 00:30:24.575 "high_priority_weight": 0, 00:30:24.575 "nvme_adminq_poll_period_us": 10000, 00:30:24.575 "nvme_ioq_poll_period_us": 0, 00:30:24.575 "io_queue_requests": 0, 00:30:24.575 "delay_cmd_submit": true, 00:30:24.575 "transport_retry_count": 4, 00:30:24.575 "bdev_retry_count": 3, 00:30:24.575 "transport_ack_timeout": 0, 00:30:24.575 "ctrlr_loss_timeout_sec": 0, 00:30:24.575 "reconnect_delay_sec": 0, 00:30:24.575 "fast_io_fail_timeout_sec": 0, 00:30:24.575 "disable_auto_failback": false, 00:30:24.575 "generate_uuids": false, 00:30:24.575 "transport_tos": 0, 00:30:24.575 "nvme_error_stat": false, 00:30:24.575 "rdma_srq_size": 0, 00:30:24.575 "io_path_stat": false, 00:30:24.575 "allow_accel_sequence": false, 00:30:24.575 "rdma_max_cq_size": 0, 00:30:24.575 "rdma_cm_event_timeout_ms": 0, 00:30:24.575 "dhchap_digests": [ 00:30:24.575 "sha256", 00:30:24.575 "sha384", 00:30:24.575 "sha512" 00:30:24.575 ], 00:30:24.575 "dhchap_dhgroups": [ 00:30:24.575 "null", 00:30:24.575 "ffdhe2048", 00:30:24.575 "ffdhe3072", 00:30:24.575 "ffdhe4096", 00:30:24.575 "ffdhe6144", 00:30:24.575 "ffdhe8192" 00:30:24.575 ] 00:30:24.575 } 00:30:24.575 }, 00:30:24.575 { 00:30:24.575 "method": "bdev_nvme_set_hotplug", 00:30:24.575 "params": { 00:30:24.575 "period_us": 100000, 00:30:24.575 "enable": false 00:30:24.575 } 00:30:24.575 }, 00:30:24.575 { 00:30:24.575 "method": "bdev_malloc_create", 00:30:24.575 "params": { 00:30:24.575 "name": "malloc0", 00:30:24.575 "num_blocks": 8192, 00:30:24.575 
"block_size": 4096, 00:30:24.575 "physical_block_size": 4096, 00:30:24.575 "uuid": "e7c26b6f-dbc2-4248-afe8-04ff7a9f2bdc", 00:30:24.575 "optimal_io_boundary": 0 00:30:24.575 } 00:30:24.575 }, 00:30:24.575 { 00:30:24.575 "method": "bdev_wait_for_examine" 00:30:24.575 } 00:30:24.575 ] 00:30:24.575 }, 00:30:24.575 { 00:30:24.575 "subsystem": "nbd", 00:30:24.575 "config": [] 00:30:24.575 }, 00:30:24.575 { 00:30:24.575 "subsystem": "scheduler", 00:30:24.575 "config": [ 00:30:24.575 { 00:30:24.575 "method": "framework_set_scheduler", 00:30:24.575 "params": { 00:30:24.575 "name": "static" 00:30:24.575 } 00:30:24.575 } 00:30:24.575 ] 00:30:24.575 }, 00:30:24.575 { 00:30:24.575 "subsystem": "nvmf", 00:30:24.575 "config": [ 00:30:24.575 { 00:30:24.575 "method": "nvmf_set_config", 00:30:24.575 "params": { 00:30:24.575 "discovery_filter": "match_any", 00:30:24.575 "admin_cmd_passthru": { 00:30:24.575 "identify_ctrlr": false 00:30:24.575 } 00:30:24.575 } 00:30:24.575 }, 00:30:24.575 { 00:30:24.575 "method": "nvmf_set_max_subsystems", 00:30:24.575 "params": { 00:30:24.575 "max_subsystems": 1024 00:30:24.575 } 00:30:24.575 }, 00:30:24.575 { 00:30:24.575 "method": "nvmf_set_crdt", 00:30:24.575 "params": { 00:30:24.575 "crdt1": 0, 00:30:24.575 "crdt2": 0, 00:30:24.575 "crdt3": 0 00:30:24.575 } 00:30:24.575 }, 00:30:24.575 { 00:30:24.575 "method": "nvmf_create_transport", 00:30:24.575 "params": { 00:30:24.575 "trtype": "TCP", 00:30:24.575 "max_queue_depth": 128, 00:30:24.575 "max_io_qpairs_per_ctrlr": 127, 00:30:24.575 "in_capsule_data_size": 4096, 00:30:24.575 "max_io_size": 131072, 00:30:24.575 "io_unit_size": 131072, 00:30:24.575 "max_aq_depth": 128, 00:30:24.575 "num_shared_buffers": 511, 00:30:24.575 "buf_cache_size": 4294967295, 00:30:24.575 "dif_insert_or_strip": false, 00:30:24.575 "zcopy": false, 00:30:24.575 "c2h_success": false, 00:30:24.575 "sock_priority": 0, 00:30:24.575 "abort_timeout_sec": 1, 00:30:24.575 "ack_timeout": 0, 00:30:24.575 "data_wr_pool_size": 0 
00:30:24.575 } 00:30:24.575 }, 00:30:24.575 { 00:30:24.575 "method": "nvmf_create_subsystem", 00:30:24.575 "params": { 00:30:24.575 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:24.575 "allow_any_host": false, 00:30:24.575 "serial_number": "00000000000000000000", 00:30:24.575 "model_number": "SPDK bdev Controller", 00:30:24.575 "max_namespaces": 32, 00:30:24.575 "min_cntlid": 1, 00:30:24.575 "max_cntlid": 65519, 00:30:24.575 "ana_reporting": false 00:30:24.575 } 00:30:24.575 }, 00:30:24.575 { 00:30:24.575 "method": "nvmf_subsystem_add_host", 00:30:24.575 "params": { 00:30:24.575 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:24.575 "host": "nqn.2016-06.io.spdk:host1", 00:30:24.575 "psk": "key0" 00:30:24.575 } 00:30:24.575 }, 00:30:24.575 { 00:30:24.575 "method": "nvmf_subsystem_add_ns", 00:30:24.575 "params": { 00:30:24.575 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:24.575 "namespace": { 00:30:24.575 "nsid": 1, 00:30:24.575 "bdev_name": "malloc0", 00:30:24.575 "nguid": "E7C26B6FDBC24248AFE804FF7A9F2BDC", 00:30:24.575 "uuid": "e7c26b6f-dbc2-4248-afe8-04ff7a9f2bdc", 00:30:24.575 "no_auto_visible": false 00:30:24.575 } 00:30:24.575 } 00:30:24.575 }, 00:30:24.575 { 00:30:24.575 "method": "nvmf_subsystem_add_listener", 00:30:24.575 "params": { 00:30:24.575 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:24.575 "listen_address": { 00:30:24.575 "trtype": "TCP", 00:30:24.575 "adrfam": "IPv4", 00:30:24.575 "traddr": "10.0.0.2", 00:30:24.575 "trsvcid": "4420" 00:30:24.575 }, 00:30:24.575 "secure_channel": true 00:30:24.575 } 00:30:24.575 } 00:30:24.575 ] 00:30:24.575 } 00:30:24.575 ] 00:30:24.575 }' 00:30:24.575 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:24.575 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:24.834 02:36:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1891936 00:30:24.834 02:36:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:30:24.834 02:36:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1891936 00:30:24.834 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1891936 ']' 00:30:24.834 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:24.834 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:24.834 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:24.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:24.834 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:24.834 02:36:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:24.834 [2024-07-11 02:36:15.050924] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:30:24.834 [2024-07-11 02:36:15.051028] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:24.834 EAL: No free 2048 kB hugepages reported on node 1 00:30:24.834 [2024-07-11 02:36:15.115974] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.834 [2024-07-11 02:36:15.203415] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:24.834 [2024-07-11 02:36:15.203477] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:24.834 [2024-07-11 02:36:15.203494] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:24.834 [2024-07-11 02:36:15.203508] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:24.834 [2024-07-11 02:36:15.203529] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:24.834 [2024-07-11 02:36:15.203632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.093 [2024-07-11 02:36:15.431364] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:25.093 [2024-07-11 02:36:15.463364] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:25.093 [2024-07-11 02:36:15.472698] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:26.028 02:36:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:26.028 02:36:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:30:26.028 02:36:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:26.028 02:36:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:26.028 02:36:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:26.028 02:36:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:26.028 02:36:16 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1892053 00:30:26.028 02:36:16 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1892053 /var/tmp/bdevperf.sock 00:30:26.028 02:36:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1892053 ']' 00:30:26.028 02:36:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:26.028 02:36:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:26.028 02:36:16 
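The target above is launched with `-c /dev/fd/62`, i.e. its JSON configuration is piped in through a file descriptor rather than written to disk. A minimal sketch of that pattern, with `cat` standing in for `nvmf_tgt` so it runs anywhere (the stand-in and the trimmed config are assumptions for illustration; the real config is the large JSON echoed above):

```shell
# Feed a JSON config to a process via an fd-backed herestring, the same
# shape as: nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62  (from the log above).
# Here `cat` is a hypothetical stand-in for the target binary.
config='{ "subsystems": [ { "subsystem": "nbd", "config": [] } ] }'
out=$(cat /dev/fd/62 62<<< "$config")
echo "$out"
```

The herestring `62<<< "$config"` attaches the config text to fd 62, so the child can read it as `/dev/fd/62` without any temp file.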
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:26.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:26.028 02:36:16 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:30:26.028 02:36:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:26.028 02:36:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:26.028 02:36:16 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:30:26.028 "subsystems": [ 00:30:26.028 { 00:30:26.028 "subsystem": "keyring", 00:30:26.028 "config": [ 00:30:26.028 { 00:30:26.028 "method": "keyring_file_add_key", 00:30:26.028 "params": { 00:30:26.028 "name": "key0", 00:30:26.028 "path": "/tmp/tmp.p0GMWYk5Zt" 00:30:26.028 } 00:30:26.028 } 00:30:26.028 ] 00:30:26.028 }, 00:30:26.028 { 00:30:26.028 "subsystem": "iobuf", 00:30:26.028 "config": [ 00:30:26.028 { 00:30:26.028 "method": "iobuf_set_options", 00:30:26.028 "params": { 00:30:26.028 "small_pool_count": 8192, 00:30:26.028 "large_pool_count": 1024, 00:30:26.028 "small_bufsize": 8192, 00:30:26.028 "large_bufsize": 135168 00:30:26.028 } 00:30:26.028 } 00:30:26.028 ] 00:30:26.028 }, 00:30:26.028 { 00:30:26.028 "subsystem": "sock", 00:30:26.028 "config": [ 00:30:26.028 { 00:30:26.028 "method": "sock_set_default_impl", 00:30:26.028 "params": { 00:30:26.028 "impl_name": "posix" 00:30:26.028 } 00:30:26.028 }, 00:30:26.028 { 00:30:26.028 "method": "sock_impl_set_options", 00:30:26.028 "params": { 00:30:26.028 "impl_name": "ssl", 00:30:26.028 "recv_buf_size": 4096, 00:30:26.028 "send_buf_size": 4096, 00:30:26.028 "enable_recv_pipe": true, 00:30:26.028 "enable_quickack": false, 00:30:26.028 "enable_placement_id": 0, 00:30:26.028 
"enable_zerocopy_send_server": true, 00:30:26.028 "enable_zerocopy_send_client": false, 00:30:26.028 "zerocopy_threshold": 0, 00:30:26.028 "tls_version": 0, 00:30:26.028 "enable_ktls": false 00:30:26.028 } 00:30:26.028 }, 00:30:26.028 { 00:30:26.028 "method": "sock_impl_set_options", 00:30:26.028 "params": { 00:30:26.028 "impl_name": "posix", 00:30:26.028 "recv_buf_size": 2097152, 00:30:26.028 "send_buf_size": 2097152, 00:30:26.028 "enable_recv_pipe": true, 00:30:26.028 "enable_quickack": false, 00:30:26.028 "enable_placement_id": 0, 00:30:26.028 "enable_zerocopy_send_server": true, 00:30:26.028 "enable_zerocopy_send_client": false, 00:30:26.028 "zerocopy_threshold": 0, 00:30:26.028 "tls_version": 0, 00:30:26.028 "enable_ktls": false 00:30:26.028 } 00:30:26.028 } 00:30:26.028 ] 00:30:26.028 }, 00:30:26.028 { 00:30:26.028 "subsystem": "vmd", 00:30:26.028 "config": [] 00:30:26.028 }, 00:30:26.028 { 00:30:26.028 "subsystem": "accel", 00:30:26.028 "config": [ 00:30:26.028 { 00:30:26.028 "method": "accel_set_options", 00:30:26.028 "params": { 00:30:26.028 "small_cache_size": 128, 00:30:26.028 "large_cache_size": 16, 00:30:26.028 "task_count": 2048, 00:30:26.028 "sequence_count": 2048, 00:30:26.028 "buf_count": 2048 00:30:26.028 } 00:30:26.028 } 00:30:26.028 ] 00:30:26.028 }, 00:30:26.028 { 00:30:26.028 "subsystem": "bdev", 00:30:26.028 "config": [ 00:30:26.028 { 00:30:26.028 "method": "bdev_set_options", 00:30:26.028 "params": { 00:30:26.028 "bdev_io_pool_size": 65535, 00:30:26.028 "bdev_io_cache_size": 256, 00:30:26.028 "bdev_auto_examine": true, 00:30:26.028 "iobuf_small_cache_size": 128, 00:30:26.028 "iobuf_large_cache_size": 16 00:30:26.028 } 00:30:26.028 }, 00:30:26.028 { 00:30:26.028 "method": "bdev_raid_set_options", 00:30:26.028 "params": { 00:30:26.028 "process_window_size_kb": 1024 00:30:26.028 } 00:30:26.028 }, 00:30:26.028 { 00:30:26.028 "method": "bdev_iscsi_set_options", 00:30:26.028 "params": { 00:30:26.028 "timeout_sec": 30 00:30:26.028 } 00:30:26.028 }, 
00:30:26.028 { 00:30:26.028 "method": "bdev_nvme_set_options", 00:30:26.028 "params": { 00:30:26.028 "action_on_timeout": "none", 00:30:26.028 "timeout_us": 0, 00:30:26.028 "timeout_admin_us": 0, 00:30:26.028 "keep_alive_timeout_ms": 10000, 00:30:26.028 "arbitration_burst": 0, 00:30:26.028 "low_priority_weight": 0, 00:30:26.028 "medium_priority_weight": 0, 00:30:26.028 "high_priority_weight": 0, 00:30:26.028 "nvme_adminq_poll_period_us": 10000, 00:30:26.028 "nvme_ioq_poll_period_us": 0, 00:30:26.028 "io_queue_requests": 512, 00:30:26.028 "delay_cmd_submit": true, 00:30:26.028 "transport_retry_count": 4, 00:30:26.028 "bdev_retry_count": 3, 00:30:26.028 "transport_ack_timeout": 0, 00:30:26.028 "ctrlr_loss_timeout_sec": 0, 00:30:26.028 "reconnect_delay_sec": 0, 00:30:26.028 "fast_io_fail_timeout_sec": 0, 00:30:26.028 "disable_auto_failback": false, 00:30:26.028 "generate_uuids": false, 00:30:26.028 "transport_tos": 0, 00:30:26.028 "nvme_error_stat": false, 00:30:26.028 "rdma_srq_size": 0, 00:30:26.028 "io_path_stat": false, 00:30:26.028 "allow_accel_sequence": false, 00:30:26.028 "rdma_max_cq_size": 0, 00:30:26.028 "rdma_cm_event_timeout_ms": 0, 00:30:26.028 "dhchap_digests": [ 00:30:26.028 "sha256", 00:30:26.028 "sha384", 00:30:26.028 "sha512" 00:30:26.028 ], 00:30:26.028 "dhchap_dhgroups": [ 00:30:26.028 "null", 00:30:26.028 "ffdhe2048", 00:30:26.028 "ffdhe3072", 00:30:26.028 "ffdhe4096", 00:30:26.028 "ffdhe6144", 00:30:26.028 "ffdhe8192" 00:30:26.028 ] 00:30:26.028 } 00:30:26.028 }, 00:30:26.028 { 00:30:26.028 "method": "bdev_nvme_attach_controller", 00:30:26.028 "params": { 00:30:26.028 "name": "nvme0", 00:30:26.028 "trtype": "TCP", 00:30:26.028 "adrfam": "IPv4", 00:30:26.028 "traddr": "10.0.0.2", 00:30:26.028 "trsvcid": "4420", 00:30:26.028 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:26.028 "prchk_reftag": false, 00:30:26.028 "prchk_guard": false, 00:30:26.028 "ctrlr_loss_timeout_sec": 0, 00:30:26.028 "reconnect_delay_sec": 0, 00:30:26.028 
"fast_io_fail_timeout_sec": 0, 00:30:26.028 "psk": "key0", 00:30:26.028 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:26.028 "hdgst": false, 00:30:26.028 "ddgst": false 00:30:26.028 } 00:30:26.028 }, 00:30:26.028 { 00:30:26.028 "method": "bdev_nvme_set_hotplug", 00:30:26.028 "params": { 00:30:26.028 "period_us": 100000, 00:30:26.028 "enable": false 00:30:26.028 } 00:30:26.028 }, 00:30:26.028 { 00:30:26.028 "method": "bdev_enable_histogram", 00:30:26.028 "params": { 00:30:26.028 "name": "nvme0n1", 00:30:26.028 "enable": true 00:30:26.028 } 00:30:26.028 }, 00:30:26.028 { 00:30:26.028 "method": "bdev_wait_for_examine" 00:30:26.028 } 00:30:26.028 ] 00:30:26.028 }, 00:30:26.028 { 00:30:26.028 "subsystem": "nbd", 00:30:26.028 "config": [] 00:30:26.028 } 00:30:26.028 ] 00:30:26.028 }' 00:30:26.028 [2024-07-11 02:36:16.162958] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:30:26.028 [2024-07-11 02:36:16.163058] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1892053 ] 00:30:26.028 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.029 [2024-07-11 02:36:16.224581] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.029 [2024-07-11 02:36:16.312154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.287 [2024-07-11 02:36:16.472885] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:26.287 02:36:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:26.287 02:36:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:30:26.287 02:36:16 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:26.287 
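After attaching the controller, the test verifies it by piping `bdev_nvme_get_controllers` output through `jq -r '.[].name'` (next entry in the log). A minimal reproduction of that extraction with canned output; the JSON shape here is an assumption based on the log, not the full RPC schema:

```shell
# Extract controller names the way the test does, from a hypothetical
# bdev_nvme_get_controllers-style response.
controllers='[ { "name": "nvme0", "ctrlrs": [] } ]'
name=$(echo "$controllers" | jq -r '.[].name')
echo "controller present: $name"
```

The `[[ nvme0 == \n\v\m\e\0 ]]` check that follows in the log is the escaped form bash xtrace prints for a plain `[[ $name == nvme0 ]]` comparison.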
02:36:16 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:30:26.546 02:36:16 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:26.546 02:36:16 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:26.546 Running I/O for 1 seconds... 00:30:27.920 00:30:27.920 Latency(us) 00:30:27.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:27.920 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:27.920 Verification LBA range: start 0x0 length 0x2000 00:30:27.920 nvme0n1 : 1.02 3013.69 11.77 0.00 0.00 41918.69 7524.50 42137.22 00:30:27.920 =================================================================================================================== 00:30:27.920 Total : 3013.69 11.77 0.00 0.00 41918.69 7524.50 42137.22 00:30:27.920 0 00:30:27.920 02:36:17 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:30:27.920 02:36:17 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:30:27.920 02:36:17 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:30:27.920 02:36:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:30:27.920 02:36:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:30:27.920 02:36:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:30:27.920 02:36:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:27.920 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:30:27.920 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:30:27.920 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:30:27.920 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:27.920 nvmf_trace.0 00:30:27.920 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:30:27.920 02:36:18 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1892053 00:30:27.920 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1892053 ']' 00:30:27.920 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1892053 00:30:27.920 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:30:27.920 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:27.920 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1892053 00:30:27.920 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:27.920 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:27.920 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1892053' 00:30:27.920 killing process with pid 1892053 00:30:27.920 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1892053 00:30:27.920 Received shutdown signal, test time was about 1.000000 seconds 00:30:27.920 00:30:27.920 Latency(us) 00:30:27.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:27.921 =================================================================================================================== 00:30:27.921 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:27.921 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1892053 00:30:27.921 02:36:18 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:30:27.921 02:36:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:27.921 02:36:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:30:27.921 02:36:18 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:27.921 02:36:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:30:27.921 02:36:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:27.921 02:36:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:27.921 rmmod nvme_tcp 00:30:27.921 rmmod nvme_fabrics 00:30:27.921 rmmod nvme_keyring 00:30:27.921 02:36:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:27.921 02:36:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:30:27.921 02:36:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:30:27.921 02:36:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1891936 ']' 00:30:27.921 02:36:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1891936 00:30:27.921 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1891936 ']' 00:30:27.921 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1891936 00:30:27.921 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:30:27.921 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:27.921 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1891936 00:30:28.180 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:28.180 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:28.180 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1891936' 00:30:28.180 killing process with pid 1891936 00:30:28.180 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1891936 00:30:28.180 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1891936 00:30:28.180 02:36:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:28.180 02:36:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == 
\t\c\p ]] 00:30:28.180 02:36:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:28.180 02:36:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:28.180 02:36:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:28.180 02:36:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.180 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:28.180 02:36:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.721 02:36:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:30.721 02:36:20 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.0PZxMMumEb /tmp/tmp.9prv834Pao /tmp/tmp.p0GMWYk5Zt 00:30:30.721 00:30:30.721 real 1m18.364s 00:30:30.721 user 2m7.039s 00:30:30.721 sys 0m24.641s 00:30:30.721 02:36:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:30.721 02:36:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:30.721 ************************************ 00:30:30.721 END TEST nvmf_tls 00:30:30.721 ************************************ 00:30:30.721 02:36:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:30.721 02:36:20 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:30:30.721 02:36:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:30.721 02:36:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:30.721 02:36:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:30.721 ************************************ 00:30:30.721 START TEST nvmf_fips 00:30:30.721 ************************************ 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:30:30.721 * Looking for test storage... 00:30:30.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:30:30.721 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
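The `ge 3.0.9 3.0.0` trace above is `scripts/common.sh` comparing dotted versions field by field (3 vs 3, 0 vs 0, 9 vs 0). A compact sketch of the same comparison using `sort -V` instead of the per-field loop (an alternative technique, not SPDK's implementation; requires GNU sort):

```shell
# ge A B: succeed when version A >= version B, by natural version sort.
ge() { [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]; }
ge 3.0.9 3.0.0 && echo "3.0.9 >= 3.0.0"
```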
-f /usr/lib64/ossl-modules/fips.so ]] 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:30:30.722 Error setting digest 00:30:30.722 0092FEBA197F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:30:30.722 0092FEBA197F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # 
xtrace_disable 00:30:30.722 02:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:30:32.101 Found 0000:08:00.0 (0x8086 - 0x159b) 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:30:32.101 Found 0000:08:00.1 (0x8086 - 0x159b) 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:30:32.101 Found net devices under 0000:08:00.0: cvl_0_0 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:30:32.101 Found net devices under 0000:08:00.1: cvl_0_1 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:32.101 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:32.359 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:32.360 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:32.360 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:32.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:32.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:30:32.360 00:30:32.360 --- 10.0.0.2 ping statistics --- 00:30:32.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.360 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:30:32.360 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:32.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:32.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:30:32.360 00:30:32.360 --- 10.0.0.1 ping statistics --- 00:30:32.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.360 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:30:32.360 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:32.360 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:30:32.360 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:32.360 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:32.360 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:32.360 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:32.360 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:32.360 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:32.360 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:32.360 02:36:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:30:32.360 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:32.360 02:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:32.360 02:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:30:32.360 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1893785 00:30:32.360 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1893785 00:30:32.360 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:32.360 02:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1893785 ']' 00:30:32.360 02:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:30:32.360 02:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:32.360 02:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.360 02:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:32.360 02:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:30:32.360 [2024-07-11 02:36:22.679598] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:30:32.360 [2024-07-11 02:36:22.679706] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:32.360 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.360 [2024-07-11 02:36:22.744247] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.618 [2024-07-11 02:36:22.830697] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:32.618 [2024-07-11 02:36:22.830759] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:32.618 [2024-07-11 02:36:22.830775] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:32.618 [2024-07-11 02:36:22.830789] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:32.618 [2024-07-11 02:36:22.830801] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:32.618 [2024-07-11 02:36:22.830831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:32.618 02:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:32.618 02:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:30:32.618 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:32.618 02:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:32.618 02:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:30:32.618 02:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:32.618 02:36:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:30:32.618 02:36:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:30:32.618 02:36:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:30:32.618 02:36:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:30:32.618 02:36:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:30:32.618 02:36:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:30:32.618 02:36:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:30:32.618 02:36:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:32.876 [2024-07-11 02:36:23.229946] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:32.876 [2024-07-11 02:36:23.245921] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:30:32.876 [2024-07-11 02:36:23.246122] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:32.876 [2024-07-11 02:36:23.275706] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:32.876 malloc0 00:30:32.876 02:36:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:33.135 02:36:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1893900 00:30:33.135 02:36:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1893900 /var/tmp/bdevperf.sock 00:30:33.135 02:36:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:30:33.135 02:36:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1893900 ']' 00:30:33.135 02:36:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:33.135 02:36:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:33.135 02:36:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:33.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:33.135 02:36:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:33.135 02:36:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:30:33.135 [2024-07-11 02:36:23.380094] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:30:33.135 [2024-07-11 02:36:23.380186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1893900 ] 00:30:33.135 EAL: No free 2048 kB hugepages reported on node 1 00:30:33.135 [2024-07-11 02:36:23.439301] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.135 [2024-07-11 02:36:23.527437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:33.393 02:36:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:33.393 02:36:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:30:33.393 02:36:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:30:33.651 [2024-07-11 02:36:23.904638] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:33.651 [2024-07-11 02:36:23.904771] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:30:33.651 TLSTESTn1 00:30:33.651 02:36:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:33.909 Running I/O for 10 seconds... 
00:30:43.872 00:30:43.872 Latency(us) 00:30:43.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:43.872 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:30:43.872 Verification LBA range: start 0x0 length 0x2000 00:30:43.872 TLSTESTn1 : 10.04 3242.77 12.67 0.00 0.00 39377.51 8252.68 38641.97 00:30:43.872 =================================================================================================================== 00:30:43.872 Total : 3242.77 12.67 0.00 0.00 39377.51 8252.68 38641.97 00:30:43.872 0 00:30:43.872 02:36:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:30:43.872 02:36:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:30:43.872 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:30:43.872 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:30:43.872 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:30:43.872 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:43.872 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:30:43.872 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:30:43.872 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:30:43.872 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:43.872 nvmf_trace.0 00:30:43.872 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:30:43.872 02:36:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1893900 00:30:43.872 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1893900 ']' 00:30:43.872 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill 
-0 1893900 00:30:43.872 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:30:43.872 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:43.872 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1893900 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1893900' 00:30:44.130 killing process with pid 1893900 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1893900 00:30:44.130 Received shutdown signal, test time was about 10.000000 seconds 00:30:44.130 00:30:44.130 Latency(us) 00:30:44.130 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.130 =================================================================================================================== 00:30:44.130 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:44.130 [2024-07-11 02:36:34.294502] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1893900 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:30:44.130 rmmod nvme_tcp 00:30:44.130 rmmod nvme_fabrics 00:30:44.130 rmmod nvme_keyring 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1893785 ']' 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1893785 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1893785 ']' 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1893785 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1893785 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1893785' 00:30:44.130 killing process with pid 1893785 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1893785 00:30:44.130 [2024-07-11 02:36:34.526915] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:44.130 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1893785 00:30:44.389 02:36:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:44.389 02:36:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:44.389 02:36:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
00:30:44.389 02:36:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:44.389 02:36:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:44.389 02:36:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.389 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:44.389 02:36:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.333 02:36:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:46.333 02:36:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:30:46.333 00:30:46.333 real 0m16.129s 00:30:46.333 user 0m21.297s 00:30:46.333 sys 0m5.053s 00:30:46.333 02:36:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:46.333 02:36:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:30:46.333 ************************************ 00:30:46.333 END TEST nvmf_fips 00:30:46.333 ************************************ 00:30:46.591 02:36:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:46.591 02:36:36 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:30:46.591 02:36:36 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:30:46.591 02:36:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:46.591 02:36:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:46.591 02:36:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:46.591 ************************************ 00:30:46.591 START TEST nvmf_fuzz 00:30:46.591 ************************************ 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:30:46.591 * Looking for test storage... 00:30:46.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:30:46.591 02:36:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local 
-ga net_devs 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:48.489 
02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:30:48.489 Found 0000:08:00.0 (0x8086 - 0x159b) 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.489 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:30:48.490 Found 0000:08:00.1 (0x8086 - 0x159b) 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # 
for pci in "${pci_devs[@]}" 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:30:48.490 Found net devices under 0000:08:00.0: cvl_0_0 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:30:48.490 Found net devices under 0000:08:00.1: cvl_0_1 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # 
is_hw=yes 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:48.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:48.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:30:48.490 00:30:48.490 --- 10.0.0.2 ping statistics --- 00:30:48.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.490 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:48.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:48.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:30:48.490 00:30:48.490 --- 10.0.0.1 ping statistics --- 00:30:48.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.490 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- 
target/fabrics_fuzz.sh@14 -- # nvmfpid=1896392 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1896392 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1896392 ']' 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:48.490 02:36:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:30:48.748 02:36:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:48.748 02:36:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:30:48.748 02:36:38 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:48.748 02:36:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.748 02:36:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:30:48.748 02:36:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.748 02:36:38 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:30:48.748 02:36:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.748 02:36:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:30:48.748 Malloc0 00:30:48.748 02:36:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.748 02:36:38 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:48.748 02:36:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.748 02:36:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:30:48.748 02:36:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.748 02:36:38 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:48.748 02:36:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.748 02:36:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:30:48.748 02:36:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:30:48.748 02:36:38 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:48.748 02:36:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.748 02:36:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:30:48.748 02:36:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.748 02:36:38 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:30:48.748 02:36:38 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:31:20.818 Fuzzing completed. Shutting down the fuzz application 00:31:20.818 00:31:20.818 Dumping successful admin opcodes: 00:31:20.818 8, 9, 10, 24, 00:31:20.818 Dumping successful io opcodes: 00:31:20.818 0, 9, 00:31:20.818 NS: 0x200003aeff00 I/O qp, Total commands completed: 458202, total successful commands: 2657, random_seed: 3331077312 00:31:20.818 NS: 0x200003aeff00 admin qp, Total commands completed: 54046, total successful commands: 434, random_seed: 1278508608 00:31:20.818 02:37:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:31:20.818 Fuzzing completed. 
Shutting down the fuzz application 00:31:20.818 00:31:20.818 Dumping successful admin opcodes: 00:31:20.818 24, 00:31:20.818 Dumping successful io opcodes: 00:31:20.818 00:31:20.818 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3403081729 00:31:20.818 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3403219469 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:20.818 rmmod nvme_tcp 00:31:20.818 rmmod nvme_fabrics 00:31:20.818 rmmod nvme_keyring 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1896392 ']' 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # 
killprocess 1896392 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1896392 ']' 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 1896392 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1896392 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1896392' 00:31:20.818 killing process with pid 1896392 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 1896392 00:31:20.818 02:37:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 1896392 00:31:20.818 02:37:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:20.818 02:37:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:20.818 02:37:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:20.818 02:37:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:20.818 02:37:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:20.818 02:37:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.818 02:37:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:20.818 02:37:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.718 02:37:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:22.718 02:37:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 
-- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:31:22.977 00:31:22.977 real 0m36.355s 00:31:22.977 user 0m51.606s 00:31:22.977 sys 0m13.709s 00:31:22.977 02:37:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:22.977 02:37:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:31:22.977 ************************************ 00:31:22.977 END TEST nvmf_fuzz 00:31:22.977 ************************************ 00:31:22.977 02:37:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:22.977 02:37:13 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:31:22.977 02:37:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:22.977 02:37:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:22.977 02:37:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:22.977 ************************************ 00:31:22.977 START TEST nvmf_multiconnection 00:31:22.977 ************************************ 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:31:22.977 * Looking for test storage... 
00:31:22.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:22.977 
02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:31:22.977 02:37:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@291 -- # pci_devs=() 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:24.880 
02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:31:24.880 Found 0000:08:00.0 (0x8086 - 0x159b) 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:24.880 
02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:31:24.880 Found 0000:08:00.1 (0x8086 - 0x159b) 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:31:24.880 Found net devices under 
0000:08:00.0: cvl_0_0 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:31:24.880 Found net devices under 0000:08:00.1: cvl_0_1 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:24.880 
02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:24.880 02:37:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:24.880 02:37:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:24.880 02:37:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:24.880 02:37:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:24.880 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:31:24.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:31:24.880 00:31:24.880 --- 10.0.0.2 ping statistics --- 00:31:24.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.880 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:31:24.880 02:37:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:24.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:24.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:31:24.880 00:31:24.880 --- 10.0.0.1 ping statistics --- 00:31:24.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.880 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:31:24.880 02:37:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:24.880 02:37:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:31:24.880 02:37:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:24.880 02:37:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:24.880 02:37:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:24.880 02:37:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:24.880 02:37:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:24.880 02:37:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:24.880 02:37:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:24.880 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:31:24.881 02:37:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:24.881 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:24.881 
02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:24.881 02:37:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1900771 00:31:24.881 02:37:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:24.881 02:37:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1900771 00:31:24.881 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 1900771 ']' 00:31:24.881 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:24.881 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:24.881 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:24.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:24.881 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:24.881 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:24.881 [2024-07-11 02:37:15.108308] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:31:24.881 [2024-07-11 02:37:15.108410] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:24.881 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.881 [2024-07-11 02:37:15.176493] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:24.881 [2024-07-11 02:37:15.269236] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:24.881 [2024-07-11 02:37:15.269298] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:24.881 [2024-07-11 02:37:15.269314] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:24.881 [2024-07-11 02:37:15.269329] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:24.881 [2024-07-11 02:37:15.269341] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:24.881 [2024-07-11 02:37:15.272532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:24.881 [2024-07-11 02:37:15.272561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:24.881 [2024-07-11 02:37:15.272615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:24.881 [2024-07-11 02:37:15.272617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.140 
[2024-07-11 02:37:15.410121] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.140 Malloc1 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:25.140 02:37:15 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.140 [2024-07-11 02:37:15.462840] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.140 Malloc2 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.140 Malloc3 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.140 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.399 Malloc4 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.399 Malloc5 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.399 Malloc6 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.399 Malloc7 00:31:25.399 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.400 Malloc8 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.400 Malloc9 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.400 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.658 Malloc10 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.658 Malloc11 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.658 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.659 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:31:25.659 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.659 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.659 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.659 02:37:15 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:31:25.659 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.659 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:31:25.659 02:37:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.659 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:31:25.659 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:25.659 02:37:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:26.225 02:37:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:31:26.225 02:37:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:31:26.225 02:37:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:31:26.225 02:37:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:31:26.225 02:37:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:31:28.137 02:37:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:31:28.137 02:37:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:31:28.137 02:37:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:31:28.137 02:37:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:31:28.137 02:37:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- 
# (( nvme_devices == nvme_device_counter )) 00:31:28.137 02:37:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:31:28.137 02:37:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:28.137 02:37:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:31:28.703 02:37:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:31:28.703 02:37:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:31:28.703 02:37:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:31:28.703 02:37:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:31:28.703 02:37:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:31:30.603 02:37:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:31:30.603 02:37:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:31:30.603 02:37:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:31:30.603 02:37:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:31:30.603 02:37:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:31:30.603 02:37:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:31:30.603 02:37:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:30.603 02:37:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:31:31.169 02:37:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:31:31.169 02:37:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:31:31.169 02:37:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:31:31.169 02:37:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:31:31.169 02:37:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:31:33.065 02:37:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:31:33.065 02:37:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:31:33.065 02:37:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:31:33.065 02:37:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:31:33.065 02:37:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:31:33.065 02:37:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:31:33.065 02:37:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:33.065 02:37:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:31:33.682 02:37:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:31:33.682 02:37:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # 
local i=0 00:31:33.682 02:37:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:31:33.682 02:37:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:31:33.682 02:37:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:31:35.584 02:37:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:31:35.584 02:37:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:31:35.584 02:37:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:31:35.584 02:37:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:31:35.584 02:37:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:31:35.584 02:37:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:31:35.584 02:37:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:35.584 02:37:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:31:36.150 02:37:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:31:36.150 02:37:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:31:36.150 02:37:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:31:36.150 02:37:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:31:36.150 02:37:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:31:38.047 02:37:28 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:31:38.047 02:37:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:31:38.048 02:37:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:31:38.048 02:37:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:31:38.048 02:37:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:31:38.048 02:37:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:31:38.048 02:37:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:38.048 02:37:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:31:38.613 02:37:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:31:38.613 02:37:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:31:38.613 02:37:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:31:38.613 02:37:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:31:38.613 02:37:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:31:41.141 02:37:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:31:41.141 02:37:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:31:41.141 02:37:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:31:41.141 02:37:30 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1207 -- # nvme_devices=1 00:31:41.141 02:37:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:31:41.141 02:37:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:31:41.141 02:37:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:41.141 02:37:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:31:41.141 02:37:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:31:41.141 02:37:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:31:41.141 02:37:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:31:41.141 02:37:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:31:41.141 02:37:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:31:43.670 02:37:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:31:43.670 02:37:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:31:43.670 02:37:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:31:43.670 02:37:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:31:43.670 02:37:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:31:43.670 02:37:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:31:43.670 02:37:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in 
$(seq 1 $NVMF_SUBSYS) 00:31:43.670 02:37:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:31:43.926 02:37:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:31:43.926 02:37:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:31:43.926 02:37:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:31:43.926 02:37:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:31:43.926 02:37:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:31:45.826 02:37:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:31:45.826 02:37:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:31:45.826 02:37:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:31:45.826 02:37:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:31:45.826 02:37:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:31:45.826 02:37:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:31:45.826 02:37:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:45.826 02:37:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:31:46.392 02:37:36 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK9 00:31:46.392 02:37:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:31:46.392 02:37:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:31:46.392 02:37:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:31:46.392 02:37:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:31:48.919 02:37:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:31:48.919 02:37:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:31:48.919 02:37:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:31:48.919 02:37:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:31:48.919 02:37:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:31:48.919 02:37:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:31:48.919 02:37:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:48.919 02:37:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:31:49.198 02:37:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:31:49.198 02:37:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:31:49.198 02:37:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:31:49.198 02:37:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # 
[[ -n '' ]] 00:31:49.198 02:37:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:31:51.097 02:37:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:31:51.097 02:37:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:31:51.097 02:37:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:31:51.097 02:37:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:31:51.097 02:37:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:31:51.097 02:37:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:31:51.097 02:37:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:31:51.097 02:37:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:31:52.029 02:37:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:31:52.029 02:37:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:31:52.029 02:37:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:31:52.029 02:37:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:31:52.029 02:37:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:31:53.922 02:37:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:31:53.922 02:37:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:31:53.922 02:37:44 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:31:53.922 02:37:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:31:53.922 02:37:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:31:53.922 02:37:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:31:53.922 02:37:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:31:53.922 [global] 00:31:53.922 thread=1 00:31:53.922 invalidate=1 00:31:53.922 rw=read 00:31:53.922 time_based=1 00:31:53.922 runtime=10 00:31:53.922 ioengine=libaio 00:31:53.922 direct=1 00:31:53.922 bs=262144 00:31:53.922 iodepth=64 00:31:53.922 norandommap=1 00:31:53.922 numjobs=1 00:31:53.922 00:31:53.922 [job0] 00:31:53.922 filename=/dev/nvme0n1 00:31:53.922 [job1] 00:31:53.922 filename=/dev/nvme10n1 00:31:53.922 [job2] 00:31:53.922 filename=/dev/nvme1n1 00:31:53.922 [job3] 00:31:53.922 filename=/dev/nvme2n1 00:31:53.922 [job4] 00:31:53.922 filename=/dev/nvme3n1 00:31:53.922 [job5] 00:31:53.922 filename=/dev/nvme4n1 00:31:53.922 [job6] 00:31:53.922 filename=/dev/nvme5n1 00:31:53.922 [job7] 00:31:53.922 filename=/dev/nvme6n1 00:31:53.922 [job8] 00:31:53.922 filename=/dev/nvme7n1 00:31:53.922 [job9] 00:31:53.922 filename=/dev/nvme8n1 00:31:53.922 [job10] 00:31:53.922 filename=/dev/nvme9n1 00:31:53.922 Could not set queue depth (nvme0n1) 00:31:53.922 Could not set queue depth (nvme10n1) 00:31:53.922 Could not set queue depth (nvme1n1) 00:31:53.922 Could not set queue depth (nvme2n1) 00:31:53.922 Could not set queue depth (nvme3n1) 00:31:53.922 Could not set queue depth (nvme4n1) 00:31:53.922 Could not set queue depth (nvme5n1) 00:31:53.922 Could not set queue depth (nvme6n1) 00:31:53.922 Could not set queue depth (nvme7n1) 00:31:53.923 Could not set 
queue depth (nvme8n1) 00:31:53.923 Could not set queue depth (nvme9n1) 00:31:54.180 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:54.180 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:54.180 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:54.180 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:54.180 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:54.180 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:54.180 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:54.180 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:54.180 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:54.180 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:54.180 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:31:54.180 fio-3.35 00:31:54.180 Starting 11 threads 00:32:06.412 00:32:06.412 job0: (groupid=0, jobs=1): err= 0: pid=1903998: Thu Jul 11 02:37:54 2024 00:32:06.412 read: IOPS=770, BW=193MiB/s (202MB/s)(1934MiB/10044msec) 00:32:06.412 slat (usec): min=11, max=130755, avg=1073.41, stdev=4112.43 00:32:06.412 clat (msec): min=15, max=280, avg=81.96, stdev=31.13 00:32:06.412 lat (msec): min=15, max=321, avg=83.03, stdev=31.53 00:32:06.412 clat percentiles (msec): 00:32:06.412 | 1.00th=[ 32], 5.00th=[ 46], 10.00th=[ 54], 20.00th=[ 61], 00:32:06.412 | 30.00th=[ 66], 
40.00th=[ 71], 50.00th=[ 79], 60.00th=[ 84], 00:32:06.412 | 70.00th=[ 90], 80.00th=[ 99], 90.00th=[ 113], 95.00th=[ 136], 00:32:06.412 | 99.00th=[ 199], 99.50th=[ 251], 99.90th=[ 262], 99.95th=[ 264], 00:32:06.412 | 99.99th=[ 279] 00:32:06.412 bw ( KiB/s): min=94208, max=266240, per=10.27%, avg=196428.80, stdev=43232.39, samples=20 00:32:06.412 iops : min= 368, max= 1040, avg=767.30, stdev=168.88, samples=20 00:32:06.412 lat (msec) : 20=0.36%, 50=6.77%, 100=74.65%, 250=17.66%, 500=0.56% 00:32:06.412 cpu : usr=0.55%, sys=2.57%, ctx=1198, majf=0, minf=4097 00:32:06.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:32:06.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:32:06.412 issued rwts: total=7736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:06.412 job1: (groupid=0, jobs=1): err= 0: pid=1904020: Thu Jul 11 02:37:54 2024 00:32:06.412 read: IOPS=510, BW=128MiB/s (134MB/s)(1292MiB/10111msec) 00:32:06.412 slat (usec): min=15, max=118046, avg=1536.06, stdev=5813.65 00:32:06.412 clat (msec): min=4, max=337, avg=123.58, stdev=45.33 00:32:06.412 lat (msec): min=4, max=337, avg=125.11, stdev=46.30 00:32:06.412 clat percentiles (msec): 00:32:06.412 | 1.00th=[ 17], 5.00th=[ 55], 10.00th=[ 69], 20.00th=[ 94], 00:32:06.412 | 30.00th=[ 103], 40.00th=[ 111], 50.00th=[ 120], 60.00th=[ 128], 00:32:06.412 | 70.00th=[ 142], 80.00th=[ 159], 90.00th=[ 180], 95.00th=[ 203], 00:32:06.412 | 99.00th=[ 253], 99.50th=[ 259], 99.90th=[ 275], 99.95th=[ 317], 00:32:06.412 | 99.99th=[ 338] 00:32:06.412 bw ( KiB/s): min=68096, max=216064, per=6.83%, avg=130636.80, stdev=36747.15, samples=20 00:32:06.412 iops : min= 266, max= 844, avg=510.30, stdev=143.54, samples=20 00:32:06.412 lat (msec) : 10=0.23%, 20=1.14%, 50=2.86%, 100=23.13%, 250=71.35% 00:32:06.412 lat (msec) : 500=1.28% 
00:32:06.412 cpu : usr=0.29%, sys=1.92%, ctx=1015, majf=0, minf=3972 00:32:06.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:32:06.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:32:06.412 issued rwts: total=5166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:06.412 job2: (groupid=0, jobs=1): err= 0: pid=1904032: Thu Jul 11 02:37:54 2024 00:32:06.413 read: IOPS=560, BW=140MiB/s (147MB/s)(1408MiB/10049msec) 00:32:06.413 slat (usec): min=10, max=95100, avg=1232.20, stdev=5355.29 00:32:06.413 clat (usec): min=1915, max=301216, avg=112808.54, stdev=50276.28 00:32:06.413 lat (usec): min=1938, max=321784, avg=114040.74, stdev=51170.72 00:32:06.413 clat percentiles (msec): 00:32:06.413 | 1.00th=[ 19], 5.00th=[ 35], 10.00th=[ 50], 20.00th=[ 69], 00:32:06.413 | 30.00th=[ 83], 40.00th=[ 95], 50.00th=[ 109], 60.00th=[ 127], 00:32:06.413 | 70.00th=[ 142], 80.00th=[ 153], 90.00th=[ 178], 95.00th=[ 199], 00:32:06.413 | 99.00th=[ 251], 99.50th=[ 262], 99.90th=[ 268], 99.95th=[ 300], 00:32:06.413 | 99.99th=[ 300] 00:32:06.413 bw ( KiB/s): min=80384, max=243200, per=7.45%, avg=142577.15, stdev=45528.11, samples=20 00:32:06.413 iops : min= 314, max= 950, avg=556.90, stdev=177.88, samples=20 00:32:06.413 lat (msec) : 2=0.02%, 4=0.27%, 10=0.30%, 20=0.87%, 50=8.77% 00:32:06.413 lat (msec) : 100=33.13%, 250=55.68%, 500=0.96% 00:32:06.413 cpu : usr=0.41%, sys=1.97%, ctx=1074, majf=0, minf=4097 00:32:06.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:32:06.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:32:06.413 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.413 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:32:06.413 job3: (groupid=0, jobs=1): err= 0: pid=1904033: Thu Jul 11 02:37:54 2024 00:32:06.413 read: IOPS=591, BW=148MiB/s (155MB/s)(1496MiB/10115msec) 00:32:06.413 slat (usec): min=10, max=121858, avg=812.14, stdev=4846.99 00:32:06.413 clat (usec): min=1460, max=280227, avg=107285.79, stdev=49269.11 00:32:06.413 lat (usec): min=1486, max=342377, avg=108097.93, stdev=49982.52 00:32:06.413 clat percentiles (msec): 00:32:06.413 | 1.00th=[ 8], 5.00th=[ 30], 10.00th=[ 44], 20.00th=[ 68], 00:32:06.413 | 30.00th=[ 80], 40.00th=[ 90], 50.00th=[ 103], 60.00th=[ 117], 00:32:06.413 | 70.00th=[ 132], 80.00th=[ 148], 90.00th=[ 174], 95.00th=[ 188], 00:32:06.413 | 99.00th=[ 243], 99.50th=[ 247], 99.90th=[ 262], 99.95th=[ 262], 00:32:06.413 | 99.99th=[ 279] 00:32:06.413 bw ( KiB/s): min=102912, max=195584, per=7.92%, avg=151537.05, stdev=31886.23, samples=20 00:32:06.413 iops : min= 402, max= 764, avg=591.90, stdev=124.62, samples=20 00:32:06.413 lat (msec) : 2=0.02%, 4=0.05%, 10=1.47%, 20=1.34%, 50=8.19% 00:32:06.413 lat (msec) : 100=37.43%, 250=51.05%, 500=0.45% 00:32:06.413 cpu : usr=0.28%, sys=1.76%, ctx=1144, majf=0, minf=4097 00:32:06.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:32:06.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:32:06.413 issued rwts: total=5982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.413 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:06.413 job4: (groupid=0, jobs=1): err= 0: pid=1904034: Thu Jul 11 02:37:54 2024 00:32:06.413 read: IOPS=654, BW=164MiB/s (171MB/s)(1654MiB/10112msec) 00:32:06.413 slat (usec): min=10, max=55001, avg=1074.43, stdev=3903.35 00:32:06.413 clat (usec): min=1253, max=217861, avg=96672.56, stdev=32930.59 00:32:06.413 lat (usec): min=1282, max=223878, avg=97746.99, stdev=33263.66 00:32:06.413 clat percentiles (msec): 
00:32:06.413 | 1.00th=[ 12], 5.00th=[ 53], 10.00th=[ 61], 20.00th=[ 73], 00:32:06.413 | 30.00th=[ 81], 40.00th=[ 88], 50.00th=[ 96], 60.00th=[ 103], 00:32:06.413 | 70.00th=[ 111], 80.00th=[ 121], 90.00th=[ 138], 95.00th=[ 157], 00:32:06.413 | 99.00th=[ 188], 99.50th=[ 201], 99.90th=[ 213], 99.95th=[ 215], 00:32:06.413 | 99.99th=[ 218] 00:32:06.413 bw ( KiB/s): min=114176, max=231424, per=8.77%, avg=167680.00, stdev=33067.47, samples=20 00:32:06.413 iops : min= 446, max= 904, avg=655.00, stdev=129.17, samples=20 00:32:06.413 lat (msec) : 2=0.23%, 4=0.08%, 10=0.47%, 20=1.77%, 50=2.24% 00:32:06.413 lat (msec) : 100=52.42%, 250=42.80% 00:32:06.413 cpu : usr=0.41%, sys=2.22%, ctx=1214, majf=0, minf=4097 00:32:06.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:32:06.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:32:06.413 issued rwts: total=6614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.413 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:06.413 job5: (groupid=0, jobs=1): err= 0: pid=1904035: Thu Jul 11 02:37:54 2024 00:32:06.413 read: IOPS=665, BW=166MiB/s (174MB/s)(1676MiB/10070msec) 00:32:06.413 slat (usec): min=11, max=89120, avg=1147.68, stdev=4928.05 00:32:06.413 clat (usec): min=791, max=303353, avg=94903.46, stdev=53974.77 00:32:06.413 lat (usec): min=841, max=303399, avg=96051.13, stdev=54815.21 00:32:06.413 clat percentiles (msec): 00:32:06.413 | 1.00th=[ 3], 5.00th=[ 16], 10.00th=[ 29], 20.00th=[ 39], 00:32:06.413 | 30.00th=[ 59], 40.00th=[ 81], 50.00th=[ 94], 60.00th=[ 108], 00:32:06.413 | 70.00th=[ 125], 80.00th=[ 140], 90.00th=[ 169], 95.00th=[ 190], 00:32:06.413 | 99.00th=[ 228], 99.50th=[ 243], 99.90th=[ 257], 99.95th=[ 257], 00:32:06.413 | 99.99th=[ 305] 00:32:06.413 bw ( KiB/s): min=95232, max=396800, per=8.88%, avg=169969.45, stdev=76780.33, samples=20 00:32:06.413 iops : min= 372, 
max= 1550, avg=663.90, stdev=299.96, samples=20 00:32:06.413 lat (usec) : 1000=0.03% 00:32:06.413 lat (msec) : 2=0.70%, 4=0.87%, 10=2.33%, 20=2.45%, 50=20.26% 00:32:06.413 lat (msec) : 100=28.04%, 250=45.08%, 500=0.25% 00:32:06.413 cpu : usr=0.42%, sys=2.40%, ctx=1172, majf=0, minf=4097 00:32:06.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:32:06.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:32:06.413 issued rwts: total=6702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.413 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:06.413 job6: (groupid=0, jobs=1): err= 0: pid=1904038: Thu Jul 11 02:37:54 2024 00:32:06.413 read: IOPS=706, BW=177MiB/s (185MB/s)(1786MiB/10108msec) 00:32:06.413 slat (usec): min=10, max=134645, avg=698.25, stdev=3813.91 00:32:06.413 clat (usec): min=858, max=238881, avg=89723.42, stdev=40867.83 00:32:06.413 lat (usec): min=884, max=238913, avg=90421.67, stdev=41271.34 00:32:06.413 clat percentiles (msec): 00:32:06.413 | 1.00th=[ 12], 5.00th=[ 21], 10.00th=[ 34], 20.00th=[ 59], 00:32:06.413 | 30.00th=[ 69], 40.00th=[ 77], 50.00th=[ 87], 60.00th=[ 96], 00:32:06.413 | 70.00th=[ 108], 80.00th=[ 125], 90.00th=[ 148], 95.00th=[ 163], 00:32:06.413 | 99.00th=[ 197], 99.50th=[ 205], 99.90th=[ 211], 99.95th=[ 213], 00:32:06.413 | 99.99th=[ 239] 00:32:06.413 bw ( KiB/s): min=109056, max=287744, per=9.48%, avg=181299.20, stdev=45959.15, samples=20 00:32:06.413 iops : min= 426, max= 1124, avg=708.20, stdev=179.53, samples=20 00:32:06.413 lat (usec) : 1000=0.01% 00:32:06.413 lat (msec) : 2=0.03%, 4=0.13%, 10=0.53%, 20=3.69%, 50=10.05% 00:32:06.413 lat (msec) : 100=50.29%, 250=35.27% 00:32:06.413 cpu : usr=0.52%, sys=2.29%, ctx=1357, majf=0, minf=4097 00:32:06.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:32:06.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:32:06.413 issued rwts: total=7145,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.413 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:06.413 job7: (groupid=0, jobs=1): err= 0: pid=1904039: Thu Jul 11 02:37:54 2024 00:32:06.413 read: IOPS=918, BW=230MiB/s (241MB/s)(2312MiB/10067msec) 00:32:06.413 slat (usec): min=11, max=110803, avg=747.18, stdev=3427.81 00:32:06.413 clat (msec): min=4, max=263, avg=68.84, stdev=43.65 00:32:06.413 lat (msec): min=4, max=264, avg=69.59, stdev=43.94 00:32:06.413 clat percentiles (msec): 00:32:06.413 | 1.00th=[ 8], 5.00th=[ 23], 10.00th=[ 27], 20.00th=[ 32], 00:32:06.413 | 30.00th=[ 36], 40.00th=[ 44], 50.00th=[ 57], 60.00th=[ 70], 00:32:06.413 | 70.00th=[ 88], 80.00th=[ 106], 90.00th=[ 131], 95.00th=[ 157], 00:32:06.413 | 99.00th=[ 197], 99.50th=[ 203], 99.90th=[ 213], 99.95th=[ 253], 00:32:06.413 | 99.99th=[ 264] 00:32:06.413 bw ( KiB/s): min=95232, max=502272, per=12.29%, avg=235161.60, stdev=105907.76, samples=20 00:32:06.413 iops : min= 372, max= 1962, avg=918.60, stdev=413.70, samples=20 00:32:06.413 lat (msec) : 10=1.73%, 20=2.18%, 50=40.83%, 100=32.58%, 250=22.63% 00:32:06.413 lat (msec) : 500=0.05% 00:32:06.413 cpu : usr=0.39%, sys=3.06%, ctx=1453, majf=0, minf=4097 00:32:06.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:32:06.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:32:06.413 issued rwts: total=9249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.413 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:06.413 job8: (groupid=0, jobs=1): err= 0: pid=1904040: Thu Jul 11 02:37:54 2024 00:32:06.413 read: IOPS=615, BW=154MiB/s (161MB/s)(1557MiB/10110msec) 00:32:06.413 slat (usec): min=11, max=84328, avg=1107.91, stdev=4778.06 00:32:06.413 
clat (usec): min=1506, max=274020, avg=102713.17, stdev=51890.95 00:32:06.413 lat (usec): min=1530, max=325373, avg=103821.09, stdev=52769.24 00:32:06.413 clat percentiles (msec): 00:32:06.413 | 1.00th=[ 16], 5.00th=[ 35], 10.00th=[ 43], 20.00th=[ 57], 00:32:06.413 | 30.00th=[ 67], 40.00th=[ 80], 50.00th=[ 95], 60.00th=[ 111], 00:32:06.413 | 70.00th=[ 131], 80.00th=[ 148], 90.00th=[ 178], 95.00th=[ 194], 00:32:06.413 | 99.00th=[ 243], 99.50th=[ 243], 99.90th=[ 249], 99.95th=[ 253], 00:32:06.413 | 99.99th=[ 275] 00:32:06.413 bw ( KiB/s): min=76288, max=278016, per=8.25%, avg=157786.25, stdev=61788.36, samples=20 00:32:06.413 iops : min= 298, max= 1086, avg=616.35, stdev=241.36, samples=20 00:32:06.413 lat (msec) : 2=0.26%, 4=0.24%, 10=0.31%, 20=0.55%, 50=14.50% 00:32:06.413 lat (msec) : 100=37.36%, 250=46.72%, 500=0.06% 00:32:06.413 cpu : usr=0.44%, sys=2.12%, ctx=1170, majf=0, minf=4097 00:32:06.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:32:06.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:32:06.413 issued rwts: total=6226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.413 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:06.413 job9: (groupid=0, jobs=1): err= 0: pid=1904041: Thu Jul 11 02:37:54 2024 00:32:06.413 read: IOPS=705, BW=176MiB/s (185MB/s)(1772MiB/10054msec) 00:32:06.413 slat (usec): min=11, max=58010, avg=1071.21, stdev=3833.44 00:32:06.413 clat (msec): min=4, max=265, avg=89.61, stdev=45.93 00:32:06.413 lat (msec): min=4, max=265, avg=90.68, stdev=46.37 00:32:06.413 clat percentiles (msec): 00:32:06.413 | 1.00th=[ 13], 5.00th=[ 27], 10.00th=[ 36], 20.00th=[ 48], 00:32:06.413 | 30.00th=[ 63], 40.00th=[ 74], 50.00th=[ 84], 60.00th=[ 95], 00:32:06.413 | 70.00th=[ 110], 80.00th=[ 130], 90.00th=[ 150], 95.00th=[ 167], 00:32:06.413 | 99.00th=[ 232], 99.50th=[ 243], 99.90th=[ 264], 99.95th=[ 
266], 00:32:06.413 | 99.99th=[ 266] 00:32:06.413 bw ( KiB/s): min=90624, max=331776, per=9.40%, avg=179882.45, stdev=66975.07, samples=20 00:32:06.413 iops : min= 354, max= 1296, avg=702.65, stdev=261.62, samples=20 00:32:06.414 lat (msec) : 10=0.80%, 20=2.06%, 50=19.16%, 100=41.77%, 250=35.99% 00:32:06.414 lat (msec) : 500=0.23% 00:32:06.414 cpu : usr=0.47%, sys=2.45%, ctx=1228, majf=0, minf=4097 00:32:06.414 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:32:06.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:32:06.414 issued rwts: total=7089,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.414 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:06.414 job10: (groupid=0, jobs=1): err= 0: pid=1904043: Thu Jul 11 02:37:54 2024 00:32:06.414 read: IOPS=798, BW=200MiB/s (209MB/s)(2012MiB/10072msec) 00:32:06.414 slat (usec): min=10, max=130531, avg=874.85, stdev=4209.85 00:32:06.414 clat (usec): min=640, max=258756, avg=79128.03, stdev=43056.39 00:32:06.414 lat (usec): min=658, max=258778, avg=80002.88, stdev=43490.46 00:32:06.414 clat percentiles (msec): 00:32:06.414 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 25], 20.00th=[ 39], 00:32:06.414 | 30.00th=[ 54], 40.00th=[ 67], 50.00th=[ 77], 60.00th=[ 88], 00:32:06.414 | 70.00th=[ 100], 80.00th=[ 114], 90.00th=[ 133], 95.00th=[ 159], 00:32:06.414 | 99.00th=[ 205], 99.50th=[ 213], 99.90th=[ 243], 99.95th=[ 247], 00:32:06.414 | 99.99th=[ 259] 00:32:06.414 bw ( KiB/s): min=101888, max=384512, per=10.68%, avg=204408.60, stdev=70402.36, samples=20 00:32:06.414 iops : min= 398, max= 1502, avg=798.45, stdev=275.02, samples=20 00:32:06.414 lat (usec) : 750=0.10%, 1000=0.14% 00:32:06.414 lat (msec) : 2=0.09%, 4=0.66%, 10=1.22%, 20=4.21%, 50=20.93% 00:32:06.414 lat (msec) : 100=43.49%, 250=29.13%, 500=0.04% 00:32:06.414 cpu : usr=0.51%, sys=2.67%, ctx=1372, majf=0, minf=4097 
00:32:06.414 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:32:06.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:32:06.414 issued rwts: total=8047,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.414 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:06.414 00:32:06.414 Run status group 0 (all jobs): 00:32:06.414 READ: bw=1868MiB/s (1959MB/s), 128MiB/s-230MiB/s (134MB/s-241MB/s), io=18.5GiB (19.8GB), run=10044-10115msec 00:32:06.414 00:32:06.414 Disk stats (read/write): 00:32:06.414 nvme0n1: ios=15268/0, merge=0/0, ticks=1242777/0, in_queue=1242777, util=97.14% 00:32:06.414 nvme10n1: ios=10161/0, merge=0/0, ticks=1238133/0, in_queue=1238133, util=97.41% 00:32:06.414 nvme1n1: ios=10973/0, merge=0/0, ticks=1236941/0, in_queue=1236941, util=97.67% 00:32:06.414 nvme2n1: ios=11770/0, merge=0/0, ticks=1246171/0, in_queue=1246171, util=97.83% 00:32:06.414 nvme3n1: ios=13002/0, merge=0/0, ticks=1237879/0, in_queue=1237879, util=97.89% 00:32:06.414 nvme4n1: ios=13215/0, merge=0/0, ticks=1235668/0, in_queue=1235668, util=98.22% 00:32:06.414 nvme5n1: ios=14105/0, merge=0/0, ticks=1242243/0, in_queue=1242243, util=98.42% 00:32:06.414 nvme6n1: ios=18331/0, merge=0/0, ticks=1238404/0, in_queue=1238404, util=98.48% 00:32:06.414 nvme7n1: ios=12242/0, merge=0/0, ticks=1238873/0, in_queue=1238873, util=98.85% 00:32:06.414 nvme8n1: ios=13932/0, merge=0/0, ticks=1241595/0, in_queue=1241595, util=99.06% 00:32:06.414 nvme9n1: ios=15948/0, merge=0/0, ticks=1240127/0, in_queue=1240127, util=99.19% 00:32:06.414 02:37:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:32:06.414 [global] 00:32:06.414 thread=1 00:32:06.414 invalidate=1 00:32:06.414 rw=randwrite 00:32:06.414 time_based=1 00:32:06.414 
runtime=10 00:32:06.414 ioengine=libaio 00:32:06.414 direct=1 00:32:06.414 bs=262144 00:32:06.414 iodepth=64 00:32:06.414 norandommap=1 00:32:06.414 numjobs=1 00:32:06.414 00:32:06.414 [job0] 00:32:06.414 filename=/dev/nvme0n1 00:32:06.414 [job1] 00:32:06.414 filename=/dev/nvme10n1 00:32:06.414 [job2] 00:32:06.414 filename=/dev/nvme1n1 00:32:06.414 [job3] 00:32:06.414 filename=/dev/nvme2n1 00:32:06.414 [job4] 00:32:06.414 filename=/dev/nvme3n1 00:32:06.414 [job5] 00:32:06.414 filename=/dev/nvme4n1 00:32:06.414 [job6] 00:32:06.414 filename=/dev/nvme5n1 00:32:06.414 [job7] 00:32:06.414 filename=/dev/nvme6n1 00:32:06.414 [job8] 00:32:06.414 filename=/dev/nvme7n1 00:32:06.414 [job9] 00:32:06.414 filename=/dev/nvme8n1 00:32:06.414 [job10] 00:32:06.414 filename=/dev/nvme9n1 00:32:06.414 Could not set queue depth (nvme0n1) 00:32:06.414 Could not set queue depth (nvme10n1) 00:32:06.414 Could not set queue depth (nvme1n1) 00:32:06.414 Could not set queue depth (nvme2n1) 00:32:06.414 Could not set queue depth (nvme3n1) 00:32:06.414 Could not set queue depth (nvme4n1) 00:32:06.414 Could not set queue depth (nvme5n1) 00:32:06.414 Could not set queue depth (nvme6n1) 00:32:06.414 Could not set queue depth (nvme7n1) 00:32:06.414 Could not set queue depth (nvme8n1) 00:32:06.414 Could not set queue depth (nvme9n1) 00:32:06.414 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:32:06.414 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:32:06.414 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:32:06.414 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:32:06.414 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:32:06.414 job5: (g=0): rw=randwrite, 
bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:32:06.414 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:32:06.414 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:32:06.414 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:32:06.414 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:32:06.414 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:32:06.414 fio-3.35 00:32:06.414 Starting 11 threads 00:32:16.443 00:32:16.443 job0: (groupid=0, jobs=1): err= 0: pid=1904883: Thu Jul 11 02:38:05 2024 00:32:16.443 write: IOPS=476, BW=119MiB/s (125MB/s)(1196MiB/10040msec); 0 zone resets 00:32:16.443 slat (usec): min=23, max=100112, avg=950.45, stdev=4070.07 00:32:16.443 clat (usec): min=836, max=449059, avg=133294.96, stdev=90697.21 00:32:16.443 lat (usec): min=892, max=454149, avg=134245.41, stdev=91558.01 00:32:16.443 clat percentiles (msec): 00:32:16.443 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 31], 20.00th=[ 53], 00:32:16.443 | 30.00th=[ 73], 40.00th=[ 87], 50.00th=[ 111], 60.00th=[ 140], 00:32:16.443 | 70.00th=[ 182], 80.00th=[ 218], 90.00th=[ 264], 95.00th=[ 292], 00:32:16.443 | 99.00th=[ 397], 99.50th=[ 426], 99.90th=[ 443], 99.95th=[ 447], 00:32:16.443 | 99.99th=[ 451] 00:32:16.443 bw ( KiB/s): min=57344, max=243200, per=8.68%, avg=120901.10, stdev=47641.37, samples=20 00:32:16.443 iops : min= 224, max= 950, avg=472.25, stdev=186.07, samples=20 00:32:16.443 lat (usec) : 1000=0.02% 00:32:16.443 lat (msec) : 2=0.25%, 4=0.36%, 10=2.24%, 20=3.74%, 50=11.95% 00:32:16.443 lat (msec) : 100=26.90%, 250=42.03%, 500=12.52% 00:32:16.443 cpu : usr=1.38%, sys=1.99%, ctx=3553, majf=0, minf=1 00:32:16.443 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:32:16.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:32:16.443 issued rwts: total=0,4785,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.443 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:16.443 job1: (groupid=0, jobs=1): err= 0: pid=1904896: Thu Jul 11 02:38:05 2024 00:32:16.443 write: IOPS=460, BW=115MiB/s (121MB/s)(1179MiB/10240msec); 0 zone resets 00:32:16.443 slat (usec): min=16, max=96897, avg=1105.50, stdev=4704.45 00:32:16.443 clat (usec): min=1056, max=678879, avg=137811.43, stdev=112687.29 00:32:16.443 lat (usec): min=1118, max=678921, avg=138916.93, stdev=113873.40 00:32:16.443 clat percentiles (msec): 00:32:16.443 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 16], 20.00th=[ 32], 00:32:16.443 | 30.00th=[ 46], 40.00th=[ 74], 50.00th=[ 116], 60.00th=[ 159], 00:32:16.443 | 70.00th=[ 197], 80.00th=[ 245], 90.00th=[ 296], 95.00th=[ 330], 00:32:16.443 | 99.00th=[ 451], 99.50th=[ 567], 99.90th=[ 676], 99.95th=[ 676], 00:32:16.443 | 99.99th=[ 676] 00:32:16.443 bw ( KiB/s): min=53248, max=232960, per=8.55%, avg=119065.60, stdev=51157.80, samples=20 00:32:16.443 iops : min= 208, max= 910, avg=465.10, stdev=199.84, samples=20 00:32:16.443 lat (msec) : 2=0.36%, 4=1.15%, 10=5.58%, 20=6.11%, 50=18.49% 00:32:16.443 lat (msec) : 100=15.06%, 250=35.06%, 500=17.50%, 750=0.70% 00:32:16.443 cpu : usr=1.44%, sys=1.54%, ctx=3615, majf=0, minf=1 00:32:16.443 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:32:16.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:32:16.443 issued rwts: total=0,4715,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.443 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:16.443 job2: (groupid=0, jobs=1): 
err= 0: pid=1904899: Thu Jul 11 02:38:05 2024 00:32:16.443 write: IOPS=406, BW=102MiB/s (107MB/s)(1041MiB/10232msec); 0 zone resets 00:32:16.443 slat (usec): min=21, max=95209, avg=1263.56, stdev=4816.29 00:32:16.443 clat (usec): min=1871, max=627299, avg=155903.50, stdev=106190.22 00:32:16.443 lat (usec): min=1954, max=627362, avg=157167.06, stdev=107338.90 00:32:16.443 clat percentiles (msec): 00:32:16.443 | 1.00th=[ 5], 5.00th=[ 15], 10.00th=[ 30], 20.00th=[ 57], 00:32:16.443 | 30.00th=[ 75], 40.00th=[ 97], 50.00th=[ 144], 60.00th=[ 184], 00:32:16.443 | 70.00th=[ 213], 80.00th=[ 251], 90.00th=[ 300], 95.00th=[ 351], 00:32:16.443 | 99.00th=[ 393], 99.50th=[ 439], 99.90th=[ 600], 99.95th=[ 600], 00:32:16.443 | 99.99th=[ 625] 00:32:16.443 bw ( KiB/s): min=45056, max=219136, per=7.54%, avg=104965.25, stdev=45253.72, samples=20 00:32:16.443 iops : min= 176, max= 856, avg=410.00, stdev=176.80, samples=20 00:32:16.443 lat (msec) : 2=0.02%, 4=0.67%, 10=2.26%, 20=3.60%, 50=9.78% 00:32:16.443 lat (msec) : 100=24.29%, 250=39.30%, 500=19.75%, 750=0.34% 00:32:16.443 cpu : usr=1.23%, sys=1.56%, ctx=2997, majf=0, minf=1 00:32:16.443 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:32:16.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:32:16.443 issued rwts: total=0,4163,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.443 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:16.443 job3: (groupid=0, jobs=1): err= 0: pid=1904905: Thu Jul 11 02:38:05 2024 00:32:16.443 write: IOPS=499, BW=125MiB/s (131MB/s)(1271MiB/10170msec); 0 zone resets 00:32:16.443 slat (usec): min=19, max=70474, avg=941.55, stdev=3888.81 00:32:16.443 clat (usec): min=898, max=385761, avg=127003.11, stdev=89610.45 00:32:16.443 lat (usec): min=951, max=385787, avg=127944.66, stdev=90577.32 00:32:16.443 clat percentiles (msec): 00:32:16.443 | 1.00th=[ 5], 5.00th=[ 
13], 10.00th=[ 23], 20.00th=[ 42], 00:32:16.443 | 30.00th=[ 61], 40.00th=[ 90], 50.00th=[ 120], 60.00th=[ 138], 00:32:16.443 | 70.00th=[ 159], 80.00th=[ 201], 90.00th=[ 268], 95.00th=[ 309], 00:32:16.443 | 99.00th=[ 355], 99.50th=[ 359], 99.90th=[ 363], 99.95th=[ 363], 00:32:16.443 | 99.99th=[ 384] 00:32:16.443 bw ( KiB/s): min=49152, max=219136, per=9.23%, avg=128512.00, stdev=46873.50, samples=20 00:32:16.443 iops : min= 192, max= 856, avg=502.00, stdev=183.10, samples=20 00:32:16.443 lat (usec) : 1000=0.06% 00:32:16.443 lat (msec) : 2=0.26%, 4=0.59%, 10=3.15%, 20=4.49%, 50=16.70% 00:32:16.443 lat (msec) : 100=18.30%, 250=45.17%, 500=11.29% 00:32:16.443 cpu : usr=1.62%, sys=1.79%, ctx=3928, majf=0, minf=1 00:32:16.443 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:32:16.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:32:16.443 issued rwts: total=0,5083,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.443 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:16.443 job4: (groupid=0, jobs=1): err= 0: pid=1904907: Thu Jul 11 02:38:05 2024 00:32:16.443 write: IOPS=581, BW=145MiB/s (152MB/s)(1464MiB/10074msec); 0 zone resets 00:32:16.443 slat (usec): min=20, max=37367, avg=743.58, stdev=3051.45 00:32:16.443 clat (usec): min=1017, max=359369, avg=109291.46, stdev=89157.15 00:32:16.443 lat (usec): min=1064, max=362732, avg=110035.04, stdev=89928.06 00:32:16.443 clat percentiles (msec): 00:32:16.443 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 12], 20.00th=[ 26], 00:32:16.443 | 30.00th=[ 45], 40.00th=[ 61], 50.00th=[ 85], 60.00th=[ 114], 00:32:16.443 | 70.00th=[ 150], 80.00th=[ 194], 90.00th=[ 247], 95.00th=[ 284], 00:32:16.443 | 99.00th=[ 338], 99.50th=[ 347], 99.90th=[ 355], 99.95th=[ 359], 00:32:16.443 | 99.99th=[ 359] 00:32:16.443 bw ( KiB/s): min=54784, max=361472, per=10.65%, avg=148311.65, stdev=75361.27, samples=20 
00:32:16.443 iops : min= 214, max= 1412, avg=579.30, stdev=294.40, samples=20 00:32:16.443 lat (msec) : 2=0.39%, 4=2.19%, 10=6.45%, 20=7.55%, 50=17.81% 00:32:16.443 lat (msec) : 100=20.34%, 250=35.93%, 500=9.34% 00:32:16.443 cpu : usr=1.54%, sys=2.30%, ctx=4570, majf=0, minf=1 00:32:16.443 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:32:16.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:32:16.443 issued rwts: total=0,5856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.443 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:16.443 job5: (groupid=0, jobs=1): err= 0: pid=1904908: Thu Jul 11 02:38:05 2024 00:32:16.443 write: IOPS=461, BW=115MiB/s (121MB/s)(1183MiB/10243msec); 0 zone resets 00:32:16.443 slat (usec): min=20, max=164106, avg=828.17, stdev=4523.33 00:32:16.443 clat (msec): min=3, max=625, avg=137.58, stdev=93.48 00:32:16.443 lat (msec): min=3, max=625, avg=138.40, stdev=94.37 00:32:16.443 clat percentiles (msec): 00:32:16.443 | 1.00th=[ 8], 5.00th=[ 18], 10.00th=[ 28], 20.00th=[ 42], 00:32:16.444 | 30.00th=[ 62], 40.00th=[ 103], 50.00th=[ 142], 60.00th=[ 167], 00:32:16.444 | 70.00th=[ 186], 80.00th=[ 209], 90.00th=[ 253], 95.00th=[ 296], 00:32:16.444 | 99.00th=[ 401], 99.50th=[ 447], 99.90th=[ 592], 99.95th=[ 600], 00:32:16.444 | 99.99th=[ 625] 00:32:16.444 bw ( KiB/s): min=40960, max=215552, per=8.58%, avg=119465.25, stdev=38362.69, samples=20 00:32:16.444 iops : min= 160, max= 842, avg=466.65, stdev=149.84, samples=20 00:32:16.444 lat (msec) : 4=0.11%, 10=1.61%, 20=4.04%, 50=20.49%, 100=13.38% 00:32:16.444 lat (msec) : 250=50.04%, 500=9.96%, 750=0.38% 00:32:16.444 cpu : usr=1.45%, sys=1.72%, ctx=3775, majf=0, minf=1 00:32:16.444 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:32:16.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.444 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:32:16.444 issued rwts: total=0,4730,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.444 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:16.444 job6: (groupid=0, jobs=1): err= 0: pid=1904909: Thu Jul 11 02:38:05 2024 00:32:16.444 write: IOPS=558, BW=140MiB/s (146MB/s)(1417MiB/10156msec); 0 zone resets 00:32:16.444 slat (usec): min=24, max=145578, avg=658.49, stdev=4057.89 00:32:16.444 clat (usec): min=942, max=504285, avg=113942.63, stdev=100893.07 00:32:16.444 lat (usec): min=972, max=504328, avg=114601.13, stdev=101602.64 00:32:16.444 clat percentiles (msec): 00:32:16.444 | 1.00th=[ 6], 5.00th=[ 13], 10.00th=[ 20], 20.00th=[ 33], 00:32:16.444 | 30.00th=[ 42], 40.00th=[ 52], 50.00th=[ 74], 60.00th=[ 116], 00:32:16.444 | 70.00th=[ 150], 80.00th=[ 197], 90.00th=[ 257], 95.00th=[ 292], 00:32:16.444 | 99.00th=[ 472], 99.50th=[ 493], 99.90th=[ 506], 99.95th=[ 506], 00:32:16.444 | 99.99th=[ 506] 00:32:16.444 bw ( KiB/s): min=41472, max=377344, per=10.30%, avg=143513.60, stdev=77263.55, samples=20 00:32:16.444 iops : min= 162, max= 1474, avg=560.60, stdev=301.81, samples=20 00:32:16.444 lat (usec) : 1000=0.02% 00:32:16.444 lat (msec) : 2=0.07%, 4=0.32%, 10=2.86%, 20=7.53%, 50=28.33% 00:32:16.444 lat (msec) : 100=17.22%, 250=32.44%, 500=11.06%, 750=0.16% 00:32:16.444 cpu : usr=1.61%, sys=2.24%, ctx=4446, majf=0, minf=1 00:32:16.444 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:32:16.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:32:16.444 issued rwts: total=0,5669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.444 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:16.444 job7: (groupid=0, jobs=1): err= 0: pid=1904911: Thu Jul 11 02:38:05 2024 00:32:16.444 write: IOPS=529, BW=132MiB/s (139MB/s)(1335MiB/10079msec); 0 zone resets 
00:32:16.444 slat (usec): min=17, max=181998, avg=632.64, stdev=4524.05 00:32:16.444 clat (usec): min=886, max=413806, avg=120131.60, stdev=89942.66 00:32:16.444 lat (usec): min=950, max=413838, avg=120764.24, stdev=90322.14 00:32:16.444 clat percentiles (msec): 00:32:16.444 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 17], 20.00th=[ 33], 00:32:16.444 | 30.00th=[ 53], 40.00th=[ 79], 50.00th=[ 101], 60.00th=[ 136], 00:32:16.444 | 70.00th=[ 171], 80.00th=[ 203], 90.00th=[ 241], 95.00th=[ 271], 00:32:16.444 | 99.00th=[ 388], 99.50th=[ 401], 99.90th=[ 409], 99.95th=[ 414], 00:32:16.444 | 99.99th=[ 414] 00:32:16.444 bw ( KiB/s): min=82944, max=236544, per=9.70%, avg=135054.25, stdev=40909.76, samples=20 00:32:16.444 iops : min= 324, max= 924, avg=527.55, stdev=159.80, samples=20 00:32:16.444 lat (usec) : 1000=0.07% 00:32:16.444 lat (msec) : 2=0.64%, 4=1.61%, 10=4.33%, 20=5.23%, 50=16.92% 00:32:16.444 lat (msec) : 100=21.09%, 250=42.79%, 500=7.32% 00:32:16.444 cpu : usr=1.63%, sys=1.93%, ctx=4143, majf=0, minf=1 00:32:16.444 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:32:16.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:32:16.444 issued rwts: total=0,5338,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.444 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:16.444 job8: (groupid=0, jobs=1): err= 0: pid=1904912: Thu Jul 11 02:38:05 2024 00:32:16.444 write: IOPS=468, BW=117MiB/s (123MB/s)(1199MiB/10234msec); 0 zone resets 00:32:16.444 slat (usec): min=18, max=55824, avg=829.43, stdev=3643.40 00:32:16.444 clat (usec): min=942, max=586317, avg=135635.97, stdev=104334.48 00:32:16.444 lat (usec): min=974, max=586408, avg=136465.41, stdev=105239.06 00:32:16.444 clat percentiles (msec): 00:32:16.444 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 14], 20.00th=[ 34], 00:32:16.444 | 30.00th=[ 56], 40.00th=[ 90], 50.00th=[ 115], 60.00th=[ 
153], 00:32:16.444 | 70.00th=[ 186], 80.00th=[ 234], 90.00th=[ 288], 95.00th=[ 317], 00:32:16.444 | 99.00th=[ 380], 99.50th=[ 443], 99.90th=[ 567], 99.95th=[ 584], 00:32:16.444 | 99.99th=[ 584] 00:32:16.444 bw ( KiB/s): min=51712, max=226816, per=8.70%, avg=121196.90, stdev=59001.90, samples=20 00:32:16.444 iops : min= 202, max= 886, avg=473.40, stdev=230.50, samples=20 00:32:16.444 lat (usec) : 1000=0.06% 00:32:16.444 lat (msec) : 2=0.81%, 4=2.04%, 10=4.73%, 20=5.82%, 50=14.68% 00:32:16.444 lat (msec) : 100=16.09%, 250=38.11%, 500=17.28%, 750=0.38% 00:32:16.444 cpu : usr=1.35%, sys=1.85%, ctx=3775, majf=0, minf=1 00:32:16.444 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:32:16.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:32:16.444 issued rwts: total=0,4797,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.444 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:16.444 job9: (groupid=0, jobs=1): err= 0: pid=1904913: Thu Jul 11 02:38:05 2024 00:32:16.444 write: IOPS=565, BW=141MiB/s (148MB/s)(1449MiB/10245msec); 0 zone resets 00:32:16.444 slat (usec): min=19, max=92780, avg=682.78, stdev=3451.93 00:32:16.444 clat (usec): min=954, max=632360, avg=112331.49, stdev=97865.36 00:32:16.444 lat (usec): min=994, max=632444, avg=113014.27, stdev=98799.37 00:32:16.444 clat percentiles (msec): 00:32:16.444 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 16], 20.00th=[ 33], 00:32:16.444 | 30.00th=[ 44], 40.00th=[ 53], 50.00th=[ 86], 60.00th=[ 121], 00:32:16.444 | 70.00th=[ 146], 80.00th=[ 182], 90.00th=[ 245], 95.00th=[ 321], 00:32:16.444 | 99.00th=[ 414], 99.50th=[ 443], 99.90th=[ 609], 99.95th=[ 634], 00:32:16.444 | 99.99th=[ 634] 00:32:16.444 bw ( KiB/s): min=49152, max=366592, per=10.54%, avg=146764.80, stdev=72032.58, samples=20 00:32:16.444 iops : min= 192, max= 1432, avg=573.30, stdev=281.38, samples=20 00:32:16.444 lat (usec) 
: 1000=0.02% 00:32:16.444 lat (msec) : 2=0.55%, 4=1.67%, 10=4.19%, 20=6.18%, 50=25.50% 00:32:16.444 lat (msec) : 100=16.34%, 250=36.19%, 500=9.04%, 750=0.33% 00:32:16.444 cpu : usr=1.55%, sys=2.05%, ctx=4437, majf=0, minf=1 00:32:16.444 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:32:16.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:32:16.444 issued rwts: total=0,5797,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.444 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:16.444 job10: (groupid=0, jobs=1): err= 0: pid=1904914: Thu Jul 11 02:38:05 2024 00:32:16.444 write: IOPS=472, BW=118MiB/s (124MB/s)(1204MiB/10199msec); 0 zone resets 00:32:16.444 slat (usec): min=17, max=78198, avg=718.32, stdev=3843.87 00:32:16.444 clat (usec): min=826, max=633145, avg=134729.66, stdev=108871.01 00:32:16.444 lat (usec): min=857, max=633224, avg=135447.98, stdev=109741.10 00:32:16.444 clat percentiles (msec): 00:32:16.444 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 13], 20.00th=[ 26], 00:32:16.444 | 30.00th=[ 42], 40.00th=[ 84], 50.00th=[ 122], 60.00th=[ 159], 00:32:16.444 | 70.00th=[ 184], 80.00th=[ 230], 90.00th=[ 296], 95.00th=[ 330], 00:32:16.444 | 99.00th=[ 409], 99.50th=[ 430], 99.90th=[ 625], 99.95th=[ 625], 00:32:16.444 | 99.99th=[ 634] 00:32:16.444 bw ( KiB/s): min=47104, max=290816, per=8.73%, avg=121651.20, stdev=58745.55, samples=20 00:32:16.444 iops : min= 184, max= 1136, avg=475.20, stdev=229.47, samples=20 00:32:16.444 lat (usec) : 1000=0.21% 00:32:16.444 lat (msec) : 2=0.64%, 4=1.97%, 10=4.65%, 20=8.54%, 50=16.76% 00:32:16.444 lat (msec) : 100=11.03%, 250=39.46%, 500=16.47%, 750=0.27% 00:32:16.444 cpu : usr=1.53%, sys=1.87%, ctx=3947, majf=0, minf=1 00:32:16.444 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:32:16.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:32:16.445 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:32:16.445 issued rwts: total=0,4815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.445 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:16.445 00:32:16.445 Run status group 0 (all jobs): 00:32:16.445 WRITE: bw=1360MiB/s (1426MB/s), 102MiB/s-145MiB/s (107MB/s-152MB/s), io=13.6GiB (14.6GB), run=10040-10245msec 00:32:16.445 00:32:16.445 Disk stats (read/write): 00:32:16.445 nvme0n1: ios=49/9283, merge=0/0, ticks=279/1237619, in_queue=1237898, util=98.45% 00:32:16.445 nvme10n1: ios=43/9387, merge=0/0, ticks=566/1251026, in_queue=1251592, util=99.52% 00:32:16.445 nvme1n1: ios=44/8287, merge=0/0, ticks=1210/1251290, in_queue=1252500, util=100.00% 00:32:16.445 nvme2n1: ios=47/9962, merge=0/0, ticks=282/1229641, in_queue=1229923, util=100.00% 00:32:16.445 nvme3n1: ios=25/11427, merge=0/0, ticks=1480/1235860, in_queue=1237340, util=100.00% 00:32:16.445 nvme4n1: ios=43/9399, merge=0/0, ticks=2967/1233974, in_queue=1236941, util=100.00% 00:32:16.445 nvme5n1: ios=0/11167, merge=0/0, ticks=0/1218009, in_queue=1218009, util=98.28% 00:32:16.445 nvme6n1: ios=37/10460, merge=0/0, ticks=2238/1207572, in_queue=1209810, util=100.00% 00:32:16.445 nvme7n1: ios=0/9553, merge=0/0, ticks=0/1255840, in_queue=1255840, util=98.78% 00:32:16.445 nvme8n1: ios=29/11541, merge=0/0, ticks=865/1253573, in_queue=1254438, util=99.81% 00:32:16.445 nvme9n1: ios=32/9616, merge=0/0, ticks=688/1260215, in_queue=1260903, util=100.00% 00:32:16.445 02:38:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:32:16.445 02:38:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:32:16.445 02:38:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:16.445 02:38:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:16.445 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:16.445 02:38:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:32:16.445 02:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:32:16.445 02:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:16.445 02:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:32:16.445 02:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:16.445 02:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:32:16.445 02:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:32:16.445 02:38:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:16.445 02:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.445 02:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:16.445 02:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.445 02:38:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:16.445 02:38:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:32:16.445 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:32:16.445 02:38:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:32:16.445 02:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:32:16.445 02:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:16.445 02:38:06 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:32:16.445 02:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:16.445 02:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:32:16.445 02:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:32:16.445 02:38:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:16.445 02:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.445 02:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:16.445 02:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.445 02:38:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:16.445 02:38:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:32:16.704 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:32:16.704 02:38:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:32:16.704 02:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:32:16.704 02:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:16.704 02:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:32:16.704 02:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:16.704 02:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:32:16.704 02:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:32:16.704 02:38:06 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:16.704 02:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.704 02:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:16.704 02:38:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.704 02:38:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:16.704 02:38:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:32:16.964 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:32:16.964 02:38:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:32:16.964 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:32:16.964 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:16.964 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:32:16.964 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:16.964 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:32:16.964 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:32:16.964 02:38:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:32:16.964 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.964 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:16.964 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.964 02:38:07 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:16.964 02:38:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:32:17.224 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:32:17.224 02:38:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:32:17.224 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:32:17.224 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:17.224 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:32:17.224 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:17.224 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:32:17.224 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:32:17.224 02:38:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:32:17.224 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.224 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:17.224 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.224 02:38:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:17.224 02:38:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:32:17.483 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:32:17.483 02:38:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:32:17.483 02:38:07 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1219 -- # local i=0 00:32:17.483 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:17.483 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:32:17.483 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:17.483 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:32:17.483 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:32:17.483 02:38:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:32:17.483 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.483 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:17.483 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.483 02:38:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:17.483 02:38:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:32:17.483 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:32:17.483 02:38:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:32:17.483 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:32:17.483 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:17.483 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:32:17.742 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:17.742 02:38:07 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:32:17.742 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:32:17.742 02:38:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:32:17.742 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.742 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:17.742 02:38:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.742 02:38:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:17.742 02:38:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:32:17.742 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:32:17.742 02:38:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:32:17.742 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:32:17.742 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:17.742 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:32:17.742 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:17.742 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:32:17.742 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:32:17.742 02:38:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:32:17.742 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.742 02:38:08 nvmf_tcp.nvmf_multiconnection 
-- common/autotest_common.sh@10 -- # set +x 00:32:17.742 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.742 02:38:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:17.742 02:38:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:32:17.742 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:32:17.742 02:38:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:32:17.742 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:32:17.742 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:17.742 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:32:17.742 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:17.742 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:32:17.742 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:32:17.742 02:38:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:32:17.742 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.742 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:17.742 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.742 02:38:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:17.742 02:38:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:32:18.001 NQN:nqn.2016-06.io.spdk:cnode10 
disconnected 1 controller(s) 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:32:18.001 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- 
# grep -q -w SPDK11 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:18.001 rmmod nvme_tcp 00:32:18.001 rmmod nvme_fabrics 00:32:18.001 rmmod nvme_keyring 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:32:18.001 
02:38:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1900771 ']' 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1900771 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 1900771 ']' 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 1900771 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1900771 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1900771' 00:32:18.001 killing process with pid 1900771 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 1900771 00:32:18.001 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 1900771 00:32:18.569 02:38:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:18.569 02:38:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:18.569 02:38:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:18.569 02:38:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:18.569 02:38:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:18.569 02:38:08 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.569 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:18.569 02:38:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:20.482 02:38:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:20.482 00:32:20.482 real 0m57.576s 00:32:20.482 user 3m9.584s 00:32:20.482 sys 0m25.481s 00:32:20.482 02:38:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:20.482 02:38:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:32:20.482 ************************************ 00:32:20.482 END TEST nvmf_multiconnection 00:32:20.482 ************************************ 00:32:20.482 02:38:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:20.482 02:38:10 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:32:20.482 02:38:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:20.482 02:38:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:20.482 02:38:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:20.482 ************************************ 00:32:20.482 START TEST nvmf_initiator_timeout 00:32:20.482 ************************************ 00:32:20.482 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:32:20.482 * Looking for test storage... 
00:32:20.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:20.482 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:20.482 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:32:20.482 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:20.482 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:20.482 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:20.482 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:20.482 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:20.482 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:20.482 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:20.482 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:20.482 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:20.482 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:20.482 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:32:20.482 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:32:20.482 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:20.482 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:20.482 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:20.482 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:20.482 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:20.482 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:20.741 02:38:10 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:20.741 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:20.742 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:32:20.742 02:38:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 
00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:22.648 
02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:32:22.648 Found 0000:08:00.0 (0x8086 - 0x159b) 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 
-- # [[ tcp == rdma ]] 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:32:22.648 Found 0000:08:00.1 (0x8086 - 0x159b) 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:08:00.0: cvl_0_0' 00:32:22.648 Found net devices under 0000:08:00.0: cvl_0_0 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:32:22.648 Found net devices under 0000:08:00.1: cvl_0_1 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:22.648 02:38:12 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:22.648 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:22.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:22.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:32:22.649 00:32:22.649 --- 10.0.0.2 ping statistics --- 00:32:22.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:22.649 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:22.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:22.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:32:22.649 00:32:22.649 --- 10.0.0.1 ping statistics --- 00:32:22.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:22.649 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1907544 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1907544 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 1907544 ']' 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:22.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:22.649 02:38:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:22.649 [2024-07-11 02:38:12.805194] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:32:22.649 [2024-07-11 02:38:12.805291] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:22.649 EAL: No free 2048 kB hugepages reported on node 1 00:32:22.649 [2024-07-11 02:38:12.873016] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:22.649 [2024-07-11 02:38:12.964216] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:22.649 [2024-07-11 02:38:12.964281] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:22.649 [2024-07-11 02:38:12.964297] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:22.649 [2024-07-11 02:38:12.964310] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:22.649 [2024-07-11 02:38:12.964322] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:22.649 [2024-07-11 02:38:12.964699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:22.649 [2024-07-11 02:38:12.964724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:22.649 [2024-07-11 02:38:12.964772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:22.649 [2024-07-11 02:38:12.964775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:22.908 Malloc0 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:22.908 Delay0 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:22.908 [2024-07-11 02:38:13.139312] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:22.908 02:38:13 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:22.908 [2024-07-11 02:38:13.167553] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.908 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:23.473 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:32:23.473 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:32:23.473 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:32:23.473 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:32:23.473 02:38:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:32:25.372 02:38:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:32:25.372 02:38:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:32:25.372 02:38:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:32:25.372 02:38:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:32:25.372 02:38:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:32:25.372 02:38:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:32:25.372 02:38:15 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1907791 00:32:25.372 02:38:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:32:25.372 02:38:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:32:25.372 [global] 00:32:25.372 thread=1 00:32:25.372 invalidate=1 00:32:25.372 rw=write 00:32:25.372 time_based=1 00:32:25.372 runtime=60 00:32:25.372 ioengine=libaio 00:32:25.372 direct=1 00:32:25.372 bs=4096 00:32:25.372 iodepth=1 00:32:25.372 norandommap=0 00:32:25.372 numjobs=1 00:32:25.372 00:32:25.372 verify_dump=1 00:32:25.372 verify_backlog=512 00:32:25.372 verify_state_save=0 00:32:25.372 do_verify=1 00:32:25.372 verify=crc32c-intel 00:32:25.372 [job0] 00:32:25.372 filename=/dev/nvme0n1 00:32:25.372 Could not set queue depth (nvme0n1) 00:32:25.629 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:25.629 fio-3.35 00:32:25.629 Starting 1 thread 00:32:28.905 02:38:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:32:28.905 02:38:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.905 02:38:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:28.905 true 00:32:28.905 02:38:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.905 02:38:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:32:28.905 02:38:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.905 02:38:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:28.905 true 00:32:28.905 02:38:18 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.905 02:38:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:32:28.905 02:38:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.905 02:38:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:28.905 true 00:32:28.905 02:38:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.905 02:38:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:32:28.905 02:38:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.905 02:38:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:28.905 true 00:32:28.905 02:38:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.905 02:38:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:32:31.432 02:38:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:32:31.432 02:38:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.432 02:38:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:31.432 true 00:32:31.432 02:38:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.432 02:38:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:32:31.432 02:38:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.432 02:38:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:31.432 true 
00:32:31.432 02:38:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.432 02:38:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:32:31.432 02:38:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.432 02:38:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:31.432 true 00:32:31.432 02:38:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.432 02:38:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:32:31.432 02:38:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.432 02:38:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:32:31.432 true 00:32:31.432 02:38:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.432 02:38:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:32:31.432 02:38:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1907791 00:33:27.674 00:33:27.674 job0: (groupid=0, jobs=1): err= 0: pid=1907844: Thu Jul 11 02:39:16 2024 00:33:27.674 read: IOPS=126, BW=505KiB/s (518kB/s)(29.6MiB/60007msec) 00:33:27.674 slat (nsec): min=5160, max=58323, avg=10259.11, stdev=5838.52 00:33:27.674 clat (usec): min=223, max=40888k, avg=7658.55, stdev=469597.75 00:33:27.674 lat (usec): min=231, max=40888k, avg=7668.81, stdev=469597.89 00:33:27.674 clat percentiles (usec): 00:33:27.674 | 1.00th=[ 237], 5.00th=[ 243], 10.00th=[ 247], 00:33:27.674 | 20.00th=[ 253], 30.00th=[ 258], 40.00th=[ 262], 00:33:27.674 | 50.00th=[ 265], 60.00th=[ 269], 70.00th=[ 277], 00:33:27.674 | 80.00th=[ 285], 90.00th=[ 351], 95.00th=[ 506], 00:33:27.674 | 99.00th=[ 42206], 99.50th=[ 42206], 
99.90th=[ 42206], 00:33:27.674 | 99.95th=[ 42206], 99.99th=[17112761] 00:33:27.674 write: IOPS=127, BW=512KiB/s (524kB/s)(30.0MiB/60007msec); 0 zone resets 00:33:27.674 slat (usec): min=6, max=30595, avg=16.48, stdev=349.03 00:33:27.674 clat (usec): min=174, max=3765, avg=218.67, stdev=50.95 00:33:27.674 lat (usec): min=182, max=30897, avg=235.15, stdev=353.74 00:33:27.674 clat percentiles (usec): 00:33:27.674 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:33:27.674 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 217], 00:33:27.674 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 249], 95.00th=[ 285], 00:33:27.674 | 99.00th=[ 310], 99.50th=[ 318], 99.90th=[ 383], 99.95th=[ 701], 00:33:27.674 | 99.99th=[ 3752] 00:33:27.674 bw ( KiB/s): min= 176, max= 8192, per=100.00%, avg=5120.00, stdev=2864.60, samples=12 00:33:27.674 iops : min= 44, max= 2048, avg=1280.00, stdev=716.15, samples=12 00:33:27.674 lat (usec) : 250=52.89%, 500=44.57%, 750=0.10%, 1000=0.01% 00:33:27.674 lat (msec) : 2=0.01%, 4=0.01%, 50=2.40%, >=2000=0.01% 00:33:27.675 cpu : usr=0.17%, sys=0.29%, ctx=15269, majf=0, minf=1 00:33:27.675 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:27.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.675 issued rwts: total=7583,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:27.675 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:27.675 00:33:27.675 Run status group 0 (all jobs): 00:33:27.675 READ: bw=505KiB/s (518kB/s), 505KiB/s-505KiB/s (518kB/s-518kB/s), io=29.6MiB (31.1MB), run=60007-60007msec 00:33:27.675 WRITE: bw=512KiB/s (524kB/s), 512KiB/s-512KiB/s (524kB/s-524kB/s), io=30.0MiB (31.5MB), run=60007-60007msec 00:33:27.675 00:33:27.675 Disk stats (read/write): 00:33:27.675 nvme0n1: ios=7632/7680, merge=0/0, ticks=18279/1643, in_queue=19922, util=99.66% 00:33:27.675 02:39:16 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:27.675 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:33:27.675 nvmf hotplug test: fio successful as expected 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:27.675 rmmod nvme_tcp 00:33:27.675 rmmod nvme_fabrics 00:33:27.675 rmmod nvme_keyring 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1907544 ']' 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1907544 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 1907544 ']' 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 1907544 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1907544 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 
00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1907544' 00:33:27.675 killing process with pid 1907544 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 1907544 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 1907544 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:27.675 02:39:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.258 02:39:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:28.258 00:33:28.258 real 1m7.640s 00:33:28.258 user 4m8.344s 00:33:28.258 sys 0m6.682s 00:33:28.258 02:39:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:28.258 02:39:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:33:28.258 ************************************ 00:33:28.258 END TEST nvmf_initiator_timeout 00:33:28.258 ************************************ 00:33:28.258 02:39:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # 
return 0 00:33:28.258 02:39:18 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:33:28.258 02:39:18 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:33:28.258 02:39:18 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:33:28.258 02:39:18 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:33:28.258 02:39:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:33:30.158 Found 0000:08:00.0 (0x8086 - 0x159b) 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:33:30.158 Found 0000:08:00.1 (0x8086 - 
0x159b) 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:33:30.158 Found net devices under 0000:08:00.0: cvl_0_0 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:30.158 02:39:20 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.159 02:39:20 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:30.159 02:39:20 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.159 02:39:20 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:30.159 02:39:20 nvmf_tcp -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:33:30.159 02:39:20 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.159 02:39:20 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:33:30.159 Found net devices under 0000:08:00.1: cvl_0_1 00:33:30.159 02:39:20 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.159 02:39:20 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:30.159 02:39:20 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:30.159 02:39:20 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:33:30.159 02:39:20 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:33:30.159 02:39:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:30.159 02:39:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:30.159 02:39:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:30.159 ************************************ 00:33:30.159 START TEST nvmf_perf_adq 00:33:30.159 ************************************ 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:33:30.159 * Looking for test storage... 
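The `gather_supported_nvmf_pci_devs` passes traced above match supported NICs by PCI vendor/device ID (here Intel E810, `0x8086:0x159b`) and then resolve each NIC's kernel net device from `<pci>/net/` in sysfs. A minimal standalone sketch of that lookup — the function name `scan_pci_net` and the fake-sysfs demo are illustrative, not SPDK code; on a real host the root would be `/sys/bus/pci/devices`:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch of what gather_supported_nvmf_pci_devs does for E810: match
# Intel NICs by vendor/device ID, then list their netdevs from <pci>/net/.
# The sysfs root is a parameter so the logic can be exercised anywhere.
scan_pci_net() {
    local root=$1 intel=0x8086 dev vendor device net
    for dev in "$root"/*; do
        vendor=$(<"$dev/vendor")
        device=$(<"$dev/device")
        [[ $vendor == "$intel" ]] || continue
        [[ $device == 0x159b || $device == 0x1592 ]] || continue
        echo "Found ${dev##*/} ($vendor - $device)"
        # Each bound NIC exposes its netdev name(s) under <pci>/net/
        for net in "$dev"/net/*; do
            if [[ -e $net ]]; then
                echo "net dev: ${net##*/}"
            fi
        done
    done
}

# Demo against a fake sysfs tree shaped like the log's 0000:08:00.0 NIC.
tmp=$(mktemp -d)
mkdir -p "$tmp/0000:08:00.0/net/cvl_0_0"
echo 0x8086 > "$tmp/0000:08:00.0/vendor"
echo 0x159b > "$tmp/0000:08:00.0/device"
scan_pci_net "$tmp"
# → Found 0000:08:00.0 (0x8086 - 0x159b)
# → net dev: cvl_0_0
rm -rf "$tmp"
```

This mirrors the "Found 0000:08:00.0 (0x8086 - 0x159b)" / "Found net devices under 0000:08:00.0: cvl_0_0" pairs printed throughout the trace.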
00:33:30.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:30.159 02:39:20 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:30.159 02:39:20 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:33:30.159 02:39:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:33:31.538 Found 0000:08:00.0 (0x8086 - 0x159b) 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:31.538 02:39:21 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:33:31.538 Found 0000:08:00.1 (0x8086 - 0x159b) 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.538 
02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:33:31.538 Found net devices under 0000:08:00.0: cvl_0_0 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:33:31.538 Found net devices under 0000:08:00.1: cvl_0_1 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:33:31.538 02:39:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:33:31.798 02:39:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:33:33.699 02:39:24 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:38.967 02:39:29 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:38.967 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:33:38.968 Found 0000:08:00.0 (0x8086 - 0x159b) 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:33:38.968 Found 0000:08:00.1 (0x8086 - 0x159b) 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:38.968 02:39:29 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:33:38.968 Found net devices under 0000:08:00.0: cvl_0_0 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:33:38.968 Found net devices under 0000:08:00.1: cvl_0_1 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 
netns cvl_0_0_ns_spdk 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:38.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:38.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:33:38.968 00:33:38.968 --- 10.0.0.2 ping statistics --- 00:33:38.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:38.968 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:38.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:38.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:33:38.968 00:33:38.968 --- 10.0.0.1 ping statistics --- 00:33:38.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:38.968 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1917387 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1917387 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 
-- # '[' -z 1917387 ']' 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:38.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:38.968 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:38.968 [2024-07-11 02:39:29.255469] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:33:38.968 [2024-07-11 02:39:29.255568] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:38.968 EAL: No free 2048 kB hugepages reported on node 1 00:33:38.968 [2024-07-11 02:39:29.322877] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:39.227 [2024-07-11 02:39:29.410438] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:39.227 [2024-07-11 02:39:29.410494] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:39.227 [2024-07-11 02:39:29.410516] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:39.227 [2024-07-11 02:39:29.410531] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:39.227 [2024-07-11 02:39:29.410543] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
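The `nvmf_tcp_init` sequence interleaved through the trace above wires the two E810 ports into a loopback test topology: one port is moved into a network namespace as the target side, the other stays on the host as the initiator, and both get addresses on a shared /24. Consolidated for readability — interface names, IPs, and the namespace name are copied verbatim from the log; this requires root and the two `cvl_0_*` interfaces, so it is a sketch of the test rig, not a general recipe:

```shell
# Consolidated from the nvmf_tcp_init trace above. cvl_0_0 becomes the
# target interface inside the namespace; cvl_0_1 stays on the host as
# the initiator. All names/addresses are as they appear in the log.
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                           # target side

ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port on the initiator-facing interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity pings in both directions, as in the log.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

The target application is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc`), which is why the subsequent RPC log lines all run under `NVMF_TARGET_NS_CMD`.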
00:33:39.227 [2024-07-11 02:39:29.410628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:39.227 [2024-07-11 02:39:29.410705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:39.227 [2024-07-11 02:39:29.410757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:39.227 [2024-07-11 02:39:29.410765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:39.227 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:39.227 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:33:39.227 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:39.227 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:39.227 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:39.227 02:39:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:39.227 02:39:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:33:39.227 02:39:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:33:39.227 02:39:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:33:39.227 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.227 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:39.227 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.227 02:39:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:33:39.227 02:39:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:33:39.227 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 
00:33:39.227 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:39.227 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.227 02:39:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:33:39.227 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.227 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:39.485 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.485 02:39:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:33:39.485 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.485 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:39.485 [2024-07-11 02:39:29.680125] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:39.485 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.486 02:39:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:33:39.486 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.486 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:39.486 Malloc1 00:33:39.486 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.486 02:39:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:39.486 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.486 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:39.486 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.486 
02:39:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:39.486 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.486 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:39.486 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.486 02:39:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:39.486 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.486 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:39.486 [2024-07-11 02:39:29.730084] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:39.486 02:39:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.486 02:39:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1917467 00:33:39.486 02:39:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:33:39.486 02:39:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:39.486 EAL: No free 2048 kB hugepages reported on node 1 00:33:41.386 02:39:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:33:41.386 02:39:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.386 02:39:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:41.386 02:39:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.386 02:39:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:33:41.386 
"tick_rate": 2700000000, 00:33:41.386 "poll_groups": [ 00:33:41.386 { 00:33:41.386 "name": "nvmf_tgt_poll_group_000", 00:33:41.386 "admin_qpairs": 1, 00:33:41.386 "io_qpairs": 1, 00:33:41.386 "current_admin_qpairs": 1, 00:33:41.386 "current_io_qpairs": 1, 00:33:41.386 "pending_bdev_io": 0, 00:33:41.386 "completed_nvme_io": 17866, 00:33:41.386 "transports": [ 00:33:41.386 { 00:33:41.386 "trtype": "TCP" 00:33:41.386 } 00:33:41.386 ] 00:33:41.386 }, 00:33:41.386 { 00:33:41.386 "name": "nvmf_tgt_poll_group_001", 00:33:41.386 "admin_qpairs": 0, 00:33:41.386 "io_qpairs": 1, 00:33:41.386 "current_admin_qpairs": 0, 00:33:41.386 "current_io_qpairs": 1, 00:33:41.386 "pending_bdev_io": 0, 00:33:41.386 "completed_nvme_io": 18832, 00:33:41.386 "transports": [ 00:33:41.386 { 00:33:41.386 "trtype": "TCP" 00:33:41.386 } 00:33:41.386 ] 00:33:41.386 }, 00:33:41.386 { 00:33:41.386 "name": "nvmf_tgt_poll_group_002", 00:33:41.386 "admin_qpairs": 0, 00:33:41.386 "io_qpairs": 1, 00:33:41.386 "current_admin_qpairs": 0, 00:33:41.386 "current_io_qpairs": 1, 00:33:41.386 "pending_bdev_io": 0, 00:33:41.386 "completed_nvme_io": 18905, 00:33:41.386 "transports": [ 00:33:41.386 { 00:33:41.386 "trtype": "TCP" 00:33:41.386 } 00:33:41.386 ] 00:33:41.386 }, 00:33:41.386 { 00:33:41.386 "name": "nvmf_tgt_poll_group_003", 00:33:41.386 "admin_qpairs": 0, 00:33:41.386 "io_qpairs": 1, 00:33:41.386 "current_admin_qpairs": 0, 00:33:41.386 "current_io_qpairs": 1, 00:33:41.386 "pending_bdev_io": 0, 00:33:41.386 "completed_nvme_io": 19230, 00:33:41.386 "transports": [ 00:33:41.386 { 00:33:41.386 "trtype": "TCP" 00:33:41.386 } 00:33:41.386 ] 00:33:41.386 } 00:33:41.386 ] 00:33:41.386 }' 00:33:41.386 02:39:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:33:41.386 02:39:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:33:41.386 02:39:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:33:41.386 02:39:31 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:33:41.386 02:39:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1917467 00:33:49.496 Initializing NVMe Controllers 00:33:49.496 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:49.496 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:33:49.496 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:33:49.496 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:33:49.496 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:33:49.496 Initialization complete. Launching workers. 00:33:49.496 ======================================================== 00:33:49.496 Latency(us) 00:33:49.496 Device Information : IOPS MiB/s Average min max 00:33:49.496 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9984.00 39.00 6411.71 2688.98 10006.45 00:33:49.496 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9934.50 38.81 6443.24 2641.39 9439.14 00:33:49.496 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9509.80 37.15 6730.70 2143.74 11150.89 00:33:49.496 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10101.60 39.46 6335.52 2830.32 10027.82 00:33:49.496 ======================================================== 00:33:49.496 Total : 39529.88 154.41 6476.90 2143.74 11150.89 00:33:49.496 00:33:49.496 02:39:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:33:49.496 02:39:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:49.496 02:39:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:33:49.496 02:39:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:49.496 02:39:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:33:49.496 02:39:39 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:49.496 02:39:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:49.496 rmmod nvme_tcp 00:33:49.756 rmmod nvme_fabrics 00:33:49.756 rmmod nvme_keyring 00:33:49.756 02:39:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:49.756 02:39:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:33:49.756 02:39:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:33:49.756 02:39:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1917387 ']' 00:33:49.756 02:39:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1917387 00:33:49.756 02:39:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1917387 ']' 00:33:49.756 02:39:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1917387 00:33:49.756 02:39:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:33:49.756 02:39:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:49.756 02:39:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1917387 00:33:49.756 02:39:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:49.756 02:39:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:49.756 02:39:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1917387' 00:33:49.756 killing process with pid 1917387 00:33:49.756 02:39:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1917387 00:33:49.756 02:39:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1917387 00:33:50.016 02:39:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:50.017 02:39:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:50.017 02:39:40 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:50.017 02:39:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:50.017 02:39:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:50.017 02:39:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:50.017 02:39:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:50.017 02:39:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:51.924 02:39:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:51.924 02:39:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:33:51.924 02:39:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:33:52.493 02:39:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:33:54.395 02:39:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ 
phy != virt ]] 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:59.706 02:39:49 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:33:59.706 Found 0000:08:00.0 (0x8086 - 0x159b) 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:33:59.706 Found 0000:08:00.1 (0x8086 - 0x159b) 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:08:00.0: cvl_0_0' 00:33:59.706 Found net devices under 0000:08:00.0: cvl_0_0 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:33:59.706 Found net devices under 0000:08:00.1: cvl_0_1 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:59.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:59.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:33:59.706 00:33:59.706 --- 10.0.0.2 ping statistics --- 00:33:59.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:59.706 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:59.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:59.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:33:59.706 00:33:59.706 --- 10.0.0.1 ping statistics --- 00:33:59.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:59.706 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:59.706 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:59.707 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:59.707 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:59.707 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:59.707 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:59.707 02:39:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:33:59.707 02:39:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:33:59.707 02:39:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 
00:33:59.707 02:39:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:33:59.707 net.core.busy_poll = 1 00:33:59.707 02:39:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:33:59.707 net.core.busy_read = 1 00:33:59.707 02:39:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:33:59.707 02:39:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:33:59.707 02:39:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:33:59.707 02:39:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:33:59.707 02:39:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:33:59.707 02:39:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:33:59.707 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:59.707 02:39:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:59.707 02:39:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:59.707 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1919465 00:33:59.707 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:59.707 02:39:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1919465 00:33:59.707 02:39:49 
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1919465 ']' 00:33:59.707 02:39:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:59.707 02:39:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:59.707 02:39:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:59.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:59.707 02:39:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:59.707 02:39:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:59.707 [2024-07-11 02:39:49.979869] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:33:59.707 [2024-07-11 02:39:49.979971] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:59.707 EAL: No free 2048 kB hugepages reported on node 1 00:33:59.707 [2024-07-11 02:39:50.053004] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:59.967 [2024-07-11 02:39:50.144228] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:59.967 [2024-07-11 02:39:50.144288] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:59.967 [2024-07-11 02:39:50.144304] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:59.967 [2024-07-11 02:39:50.144318] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:33:59.967 [2024-07-11 02:39:50.144330] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:59.967 [2024-07-11 02:39:50.144397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:59.967 [2024-07-11 02:39:50.147532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:59.967 [2024-07-11 02:39:50.147617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:59.967 [2024-07-11 02:39:50.147652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:59.967 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:59.967 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:33:59.967 02:39:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:59.967 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:59.967 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:59.967 02:39:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:59.967 02:39:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:33:59.967 02:39:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:33:59.967 02:39:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:33:59.967 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.967 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:59.967 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.967 02:39:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:33:59.967 02:39:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 
--enable-zerocopy-send-server -i posix 00:33:59.967 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.967 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:59.967 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.967 02:39:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:33:59.967 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.967 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:34:00.227 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.227 02:39:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:34:00.227 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.227 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:34:00.227 [2024-07-11 02:39:50.424153] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:00.227 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.227 02:39:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:34:00.227 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.227 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:34:00.227 Malloc1 00:34:00.227 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.227 02:39:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:00.227 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.227 02:39:50 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:34:00.227 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.227 02:39:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:00.227 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.227 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:34:00.227 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.227 02:39:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:00.227 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.227 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:34:00.227 [2024-07-11 02:39:50.474094] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:00.227 02:39:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.227 02:39:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1919573 00:34:00.227 02:39:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:34:00.227 02:39:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:00.227 EAL: No free 2048 kB hugepages reported on node 1 00:34:02.133 02:39:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:34:02.133 02:39:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.133 02:39:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:34:02.133 02:39:52 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:02.133 02:39:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{
00:34:02.133   "tick_rate": 2700000000,
00:34:02.133   "poll_groups": [
00:34:02.133     {
00:34:02.133       "name": "nvmf_tgt_poll_group_000",
00:34:02.133       "admin_qpairs": 1,
00:34:02.133       "io_qpairs": 4,
00:34:02.133       "current_admin_qpairs": 1,
00:34:02.133       "current_io_qpairs": 4,
00:34:02.133       "pending_bdev_io": 0,
00:34:02.133       "completed_nvme_io": 27561,
00:34:02.133       "transports": [
00:34:02.133         {
00:34:02.133           "trtype": "TCP"
00:34:02.133         }
00:34:02.133       ]
00:34:02.133     },
00:34:02.133     {
00:34:02.133       "name": "nvmf_tgt_poll_group_001",
00:34:02.133       "admin_qpairs": 0,
00:34:02.133       "io_qpairs": 0,
00:34:02.133       "current_admin_qpairs": 0,
00:34:02.133       "current_io_qpairs": 0,
00:34:02.133       "pending_bdev_io": 0,
00:34:02.133       "completed_nvme_io": 0,
00:34:02.133       "transports": [
00:34:02.133         {
00:34:02.133           "trtype": "TCP"
00:34:02.133         }
00:34:02.133       ]
00:34:02.133     },
00:34:02.133     {
00:34:02.133       "name": "nvmf_tgt_poll_group_002",
00:34:02.133       "admin_qpairs": 0,
00:34:02.133       "io_qpairs": 0,
00:34:02.133       "current_admin_qpairs": 0,
00:34:02.133       "current_io_qpairs": 0,
00:34:02.133       "pending_bdev_io": 0,
00:34:02.133       "completed_nvme_io": 0,
00:34:02.133       "transports": [
00:34:02.133         {
00:34:02.133           "trtype": "TCP"
00:34:02.133         }
00:34:02.133       ]
00:34:02.133     },
00:34:02.133     {
00:34:02.133       "name": "nvmf_tgt_poll_group_003",
00:34:02.133       "admin_qpairs": 0,
00:34:02.133       "io_qpairs": 0,
00:34:02.133       "current_admin_qpairs": 0,
00:34:02.133       "current_io_qpairs": 0,
00:34:02.134       "pending_bdev_io": 0,
00:34:02.134       "completed_nvme_io": 0,
00:34:02.134       "transports": [
00:34:02.134         {
00:34:02.134           "trtype": "TCP"
00:34:02.134         }
00:34:02.134       ]
00:34:02.134     }
00:34:02.134   ]
00:34:02.134 }'
00:34:02.134 02:39:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
00:34:02.134 02:39:52 nvmf_tcp.nvmf_perf_adq --
target/perf_adq.sh@100 -- # wc -l
00:34:02.134 02:39:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=3
00:34:02.134 02:39:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 3 -lt 2 ]]
00:34:02.134 02:39:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1919573
00:34:10.256 Initializing NVMe Controllers
00:34:10.257 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:10.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:34:10.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:34:10.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:34:10.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:34:10.257 Initialization complete. Launching workers.
00:34:10.257 ========================================================
00:34:10.257                                                                  Latency(us)
00:34:10.257 Device Information                                             :     IOPS    MiB/s   Average      min      max
00:34:10.257 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4:  3517.40    13.74  18205.04  2441.41  71443.89
00:34:10.257 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5:  3980.30    15.55  16152.67  2226.46  67559.48
00:34:10.257 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6:  3908.20    15.27  16409.97  2312.23  67134.80
00:34:10.257 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7:  3569.40    13.94  17940.18  2070.40  65696.22
00:34:10.257 ========================================================
00:34:10.257 Total                                                          : 14975.29    58.50  17127.94  2070.40  71443.89
00:34:10.257
00:34:10.257 02:40:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini
00:34:10.257 02:40:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
00:34:10.257 02:40:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
00:34:10.257 02:40:00 nvmf_tcp.nvmf_perf_adq --
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:10.257 02:40:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:34:10.257 02:40:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:10.257 02:40:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:10.257 rmmod nvme_tcp 00:34:10.514 rmmod nvme_fabrics 00:34:10.514 rmmod nvme_keyring 00:34:10.514 02:40:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:10.514 02:40:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:34:10.514 02:40:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:34:10.514 02:40:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1919465 ']' 00:34:10.514 02:40:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1919465 00:34:10.514 02:40:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1919465 ']' 00:34:10.514 02:40:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1919465 00:34:10.514 02:40:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:34:10.514 02:40:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:10.514 02:40:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1919465 00:34:10.514 02:40:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:10.514 02:40:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:10.514 02:40:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1919465' 00:34:10.514 killing process with pid 1919465 00:34:10.514 02:40:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1919465 00:34:10.514 02:40:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1919465 00:34:10.773 02:40:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # 
'[' '' == iso ']' 00:34:10.773 02:40:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:10.773 02:40:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:10.773 02:40:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:10.773 02:40:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:10.773 02:40:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.773 02:40:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:10.773 02:40:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.062 02:40:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:14.062 02:40:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:34:14.062 00:34:14.062 real 0m43.884s 00:34:14.062 user 2m38.093s 00:34:14.062 sys 0m9.655s 00:34:14.062 02:40:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:14.062 02:40:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:34:14.062 ************************************ 00:34:14.062 END TEST nvmf_perf_adq 00:34:14.062 ************************************ 00:34:14.062 02:40:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:14.062 02:40:04 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:34:14.062 02:40:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:14.062 02:40:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:14.062 02:40:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:14.062 ************************************ 00:34:14.062 START TEST nvmf_shutdown 00:34:14.062 ************************************ 00:34:14.062 
02:40:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:34:14.062 * Looking for test storage... 00:34:14.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:14.062 02:40:04 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:14.062 02:40:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:34:14.062 02:40:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:14.062 02:40:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:14.062 02:40:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:14.062 02:40:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:14.062 02:40:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:14.062 02:40:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:14.062 02:40:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:14.062 02:40:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:14.062 02:40:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:14.062 02:40:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:14.063 
02:40:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:14.063 02:40:04 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:14.063 ************************************ 00:34:14.063 START TEST nvmf_shutdown_tc1 00:34:14.063 ************************************ 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:14.063 02:40:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:34:14.063 02:40:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@298 -- # mlx=() 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:34:15.443 Found 0000:08:00.0 (0x8086 - 0x159b) 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:34:15.443 Found 0000:08:00.1 (0x8086 - 0x159b) 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:34:15.443 Found net devices under 0000:08:00.0: cvl_0_0 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:15.443 02:40:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:34:15.443 Found net devices under 0000:08:00.1: cvl_0_1 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:15.443 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:15.444 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:15.444 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:15.444 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:15.444 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:15.444 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:15.444 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:15.444 
02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:15.444 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:15.444 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:15.444 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:15.444 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:15.444 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:15.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:15.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms
00:34:15.702
00:34:15.702 --- 10.0.0.2 ping statistics ---
00:34:15.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:15.702 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms
00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:15.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:15.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms
00:34:15.702
00:34:15.702 --- 10.0.0.1 ping statistics ---
00:34:15.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:15.702 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms
00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0
00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1922094 00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1922094 00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1922094 ']' 00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:15.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:15.702 02:40:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:15.702 [2024-07-11 02:40:06.010090] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:34:15.702 [2024-07-11 02:40:06.010180] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:15.702 EAL: No free 2048 kB hugepages reported on node 1 00:34:15.702 [2024-07-11 02:40:06.078424] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:15.961 [2024-07-11 02:40:06.166539] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:15.961 [2024-07-11 02:40:06.166600] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:15.961 [2024-07-11 02:40:06.166616] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:15.961 [2024-07-11 02:40:06.166630] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:15.961 [2024-07-11 02:40:06.166644] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:15.961 [2024-07-11 02:40:06.166720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:15.961 [2024-07-11 02:40:06.166800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:15.961 [2024-07-11 02:40:06.166883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:34:15.961 [2024-07-11 02:40:06.166887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:15.961 [2024-07-11 02:40:06.313273] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:34:15.961 
02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:34:15.961 02:40:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.961 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:15.961 Malloc1 00:34:16.219 [2024-07-11 02:40:06.399812] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:16.219 Malloc2 00:34:16.219 Malloc3 00:34:16.219 Malloc4 00:34:16.219 Malloc5 00:34:16.219 Malloc6 00:34:16.477 Malloc7 00:34:16.477 Malloc8 00:34:16.477 Malloc9 00:34:16.477 Malloc10 00:34:16.477 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.477 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:34:16.477 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:16.477 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:16.477 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1922156 00:34:16.477 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1922156 
/var/tmp/bdevperf.sock 00:34:16.477 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1922156 ']' 00:34:16.477 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:16.477 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:34:16.477 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:34:16.477 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:16.477 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:16.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:34:16.477 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:34:16.477 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:16.477 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:34:16.477 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:16.477 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:16.477 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:16.477 { 00:34:16.477 "params": { 00:34:16.477 "name": "Nvme$subsystem", 00:34:16.477 "trtype": "$TEST_TRANSPORT", 00:34:16.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:16.477 "adrfam": "ipv4", 00:34:16.477 "trsvcid": "$NVMF_PORT", 00:34:16.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:16.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:16.477 "hdgst": ${hdgst:-false}, 00:34:16.478 "ddgst": ${ddgst:-false} 00:34:16.478 }, 00:34:16.478 "method": "bdev_nvme_attach_controller" 00:34:16.478 } 00:34:16.478 EOF 00:34:16.478 )") 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:16.478 { 00:34:16.478 "params": { 00:34:16.478 "name": "Nvme$subsystem", 00:34:16.478 "trtype": "$TEST_TRANSPORT", 00:34:16.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:16.478 "adrfam": "ipv4", 00:34:16.478 "trsvcid": "$NVMF_PORT", 00:34:16.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:16.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:16.478 "hdgst": ${hdgst:-false}, 00:34:16.478 "ddgst": ${ddgst:-false} 00:34:16.478 
}, 00:34:16.478 "method": "bdev_nvme_attach_controller" 00:34:16.478 } 00:34:16.478 EOF 00:34:16.478 )") 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:16.478 { 00:34:16.478 "params": { 00:34:16.478 "name": "Nvme$subsystem", 00:34:16.478 "trtype": "$TEST_TRANSPORT", 00:34:16.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:16.478 "adrfam": "ipv4", 00:34:16.478 "trsvcid": "$NVMF_PORT", 00:34:16.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:16.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:16.478 "hdgst": ${hdgst:-false}, 00:34:16.478 "ddgst": ${ddgst:-false} 00:34:16.478 }, 00:34:16.478 "method": "bdev_nvme_attach_controller" 00:34:16.478 } 00:34:16.478 EOF 00:34:16.478 )") 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:16.478 { 00:34:16.478 "params": { 00:34:16.478 "name": "Nvme$subsystem", 00:34:16.478 "trtype": "$TEST_TRANSPORT", 00:34:16.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:16.478 "adrfam": "ipv4", 00:34:16.478 "trsvcid": "$NVMF_PORT", 00:34:16.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:16.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:16.478 "hdgst": ${hdgst:-false}, 00:34:16.478 "ddgst": ${ddgst:-false} 00:34:16.478 }, 00:34:16.478 "method": "bdev_nvme_attach_controller" 00:34:16.478 } 00:34:16.478 EOF 00:34:16.478 )") 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:16.478 02:40:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:16.478 { 00:34:16.478 "params": { 00:34:16.478 "name": "Nvme$subsystem", 00:34:16.478 "trtype": "$TEST_TRANSPORT", 00:34:16.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:16.478 "adrfam": "ipv4", 00:34:16.478 "trsvcid": "$NVMF_PORT", 00:34:16.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:16.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:16.478 "hdgst": ${hdgst:-false}, 00:34:16.478 "ddgst": ${ddgst:-false} 00:34:16.478 }, 00:34:16.478 "method": "bdev_nvme_attach_controller" 00:34:16.478 } 00:34:16.478 EOF 00:34:16.478 )") 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:16.478 { 00:34:16.478 "params": { 00:34:16.478 "name": "Nvme$subsystem", 00:34:16.478 "trtype": "$TEST_TRANSPORT", 00:34:16.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:16.478 "adrfam": "ipv4", 00:34:16.478 "trsvcid": "$NVMF_PORT", 00:34:16.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:16.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:16.478 "hdgst": ${hdgst:-false}, 00:34:16.478 "ddgst": ${ddgst:-false} 00:34:16.478 }, 00:34:16.478 "method": "bdev_nvme_attach_controller" 00:34:16.478 } 00:34:16.478 EOF 00:34:16.478 )") 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:16.478 { 00:34:16.478 
"params": { 00:34:16.478 "name": "Nvme$subsystem", 00:34:16.478 "trtype": "$TEST_TRANSPORT", 00:34:16.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:16.478 "adrfam": "ipv4", 00:34:16.478 "trsvcid": "$NVMF_PORT", 00:34:16.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:16.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:16.478 "hdgst": ${hdgst:-false}, 00:34:16.478 "ddgst": ${ddgst:-false} 00:34:16.478 }, 00:34:16.478 "method": "bdev_nvme_attach_controller" 00:34:16.478 } 00:34:16.478 EOF 00:34:16.478 )") 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:16.478 { 00:34:16.478 "params": { 00:34:16.478 "name": "Nvme$subsystem", 00:34:16.478 "trtype": "$TEST_TRANSPORT", 00:34:16.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:16.478 "adrfam": "ipv4", 00:34:16.478 "trsvcid": "$NVMF_PORT", 00:34:16.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:16.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:16.478 "hdgst": ${hdgst:-false}, 00:34:16.478 "ddgst": ${ddgst:-false} 00:34:16.478 }, 00:34:16.478 "method": "bdev_nvme_attach_controller" 00:34:16.478 } 00:34:16.478 EOF 00:34:16.478 )") 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:16.478 { 00:34:16.478 "params": { 00:34:16.478 "name": "Nvme$subsystem", 00:34:16.478 "trtype": "$TEST_TRANSPORT", 00:34:16.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:16.478 "adrfam": "ipv4", 00:34:16.478 "trsvcid": "$NVMF_PORT", 00:34:16.478 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:34:16.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:16.478 "hdgst": ${hdgst:-false}, 00:34:16.478 "ddgst": ${ddgst:-false} 00:34:16.478 }, 00:34:16.478 "method": "bdev_nvme_attach_controller" 00:34:16.478 } 00:34:16.478 EOF 00:34:16.478 )") 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:16.478 { 00:34:16.478 "params": { 00:34:16.478 "name": "Nvme$subsystem", 00:34:16.478 "trtype": "$TEST_TRANSPORT", 00:34:16.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:16.478 "adrfam": "ipv4", 00:34:16.478 "trsvcid": "$NVMF_PORT", 00:34:16.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:16.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:16.478 "hdgst": ${hdgst:-false}, 00:34:16.478 "ddgst": ${ddgst:-false} 00:34:16.478 }, 00:34:16.478 "method": "bdev_nvme_attach_controller" 00:34:16.478 } 00:34:16.478 EOF 00:34:16.478 )") 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:34:16.478 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:34:16.479 02:40:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:16.479 "params": { 00:34:16.479 "name": "Nvme1", 00:34:16.479 "trtype": "tcp", 00:34:16.479 "traddr": "10.0.0.2", 00:34:16.479 "adrfam": "ipv4", 00:34:16.479 "trsvcid": "4420", 00:34:16.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:16.479 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:16.479 "hdgst": false, 00:34:16.479 "ddgst": false 00:34:16.479 }, 00:34:16.479 "method": "bdev_nvme_attach_controller" 00:34:16.479 },{ 00:34:16.479 "params": { 00:34:16.479 "name": "Nvme2", 00:34:16.479 "trtype": "tcp", 00:34:16.479 "traddr": "10.0.0.2", 00:34:16.479 "adrfam": "ipv4", 00:34:16.479 "trsvcid": "4420", 00:34:16.479 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:16.479 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:16.479 "hdgst": false, 00:34:16.479 "ddgst": false 00:34:16.479 }, 00:34:16.479 "method": "bdev_nvme_attach_controller" 00:34:16.479 },{ 00:34:16.479 "params": { 00:34:16.479 "name": "Nvme3", 00:34:16.479 "trtype": "tcp", 00:34:16.479 "traddr": "10.0.0.2", 00:34:16.479 "adrfam": "ipv4", 00:34:16.479 "trsvcid": "4420", 00:34:16.479 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:34:16.479 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:34:16.479 "hdgst": false, 00:34:16.479 "ddgst": false 00:34:16.479 }, 00:34:16.479 "method": "bdev_nvme_attach_controller" 00:34:16.479 },{ 00:34:16.479 "params": { 00:34:16.479 "name": "Nvme4", 00:34:16.479 "trtype": "tcp", 00:34:16.479 "traddr": "10.0.0.2", 00:34:16.479 "adrfam": "ipv4", 00:34:16.479 "trsvcid": "4420", 00:34:16.479 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:34:16.479 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:34:16.479 "hdgst": false, 00:34:16.479 "ddgst": false 00:34:16.479 }, 00:34:16.479 "method": "bdev_nvme_attach_controller" 00:34:16.479 },{ 00:34:16.479 "params": { 00:34:16.479 "name": "Nvme5", 00:34:16.479 
"trtype": "tcp", 00:34:16.479 "traddr": "10.0.0.2", 00:34:16.479 "adrfam": "ipv4", 00:34:16.479 "trsvcid": "4420", 00:34:16.479 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:34:16.479 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:34:16.479 "hdgst": false, 00:34:16.479 "ddgst": false 00:34:16.479 }, 00:34:16.479 "method": "bdev_nvme_attach_controller" 00:34:16.479 },{ 00:34:16.479 "params": { 00:34:16.479 "name": "Nvme6", 00:34:16.479 "trtype": "tcp", 00:34:16.479 "traddr": "10.0.0.2", 00:34:16.479 "adrfam": "ipv4", 00:34:16.479 "trsvcid": "4420", 00:34:16.479 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:34:16.479 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:34:16.479 "hdgst": false, 00:34:16.479 "ddgst": false 00:34:16.479 }, 00:34:16.479 "method": "bdev_nvme_attach_controller" 00:34:16.479 },{ 00:34:16.479 "params": { 00:34:16.479 "name": "Nvme7", 00:34:16.479 "trtype": "tcp", 00:34:16.479 "traddr": "10.0.0.2", 00:34:16.479 "adrfam": "ipv4", 00:34:16.479 "trsvcid": "4420", 00:34:16.479 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:34:16.479 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:34:16.479 "hdgst": false, 00:34:16.479 "ddgst": false 00:34:16.479 }, 00:34:16.479 "method": "bdev_nvme_attach_controller" 00:34:16.479 },{ 00:34:16.479 "params": { 00:34:16.479 "name": "Nvme8", 00:34:16.479 "trtype": "tcp", 00:34:16.479 "traddr": "10.0.0.2", 00:34:16.479 "adrfam": "ipv4", 00:34:16.479 "trsvcid": "4420", 00:34:16.479 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:34:16.479 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:34:16.479 "hdgst": false, 00:34:16.479 "ddgst": false 00:34:16.479 }, 00:34:16.479 "method": "bdev_nvme_attach_controller" 00:34:16.479 },{ 00:34:16.479 "params": { 00:34:16.479 "name": "Nvme9", 00:34:16.479 "trtype": "tcp", 00:34:16.479 "traddr": "10.0.0.2", 00:34:16.479 "adrfam": "ipv4", 00:34:16.479 "trsvcid": "4420", 00:34:16.479 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:34:16.479 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:34:16.479 "hdgst": false, 00:34:16.479 "ddgst": 
false 00:34:16.479 }, 00:34:16.479 "method": "bdev_nvme_attach_controller" 00:34:16.479 },{ 00:34:16.479 "params": { 00:34:16.479 "name": "Nvme10", 00:34:16.479 "trtype": "tcp", 00:34:16.479 "traddr": "10.0.0.2", 00:34:16.479 "adrfam": "ipv4", 00:34:16.479 "trsvcid": "4420", 00:34:16.479 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:34:16.479 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:34:16.479 "hdgst": false, 00:34:16.479 "ddgst": false 00:34:16.479 }, 00:34:16.479 "method": "bdev_nvme_attach_controller" 00:34:16.479 }' 00:34:16.479 [2024-07-11 02:40:06.888903] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:34:16.479 [2024-07-11 02:40:06.888988] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:34:16.737 EAL: No free 2048 kB hugepages reported on node 1 00:34:16.737 [2024-07-11 02:40:06.951056] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.737 [2024-07-11 02:40:07.038759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:18.638 02:40:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:18.638 02:40:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:34:18.638 02:40:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:34:18.638 02:40:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.638 02:40:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:18.638 02:40:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.638 02:40:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # 
kill -9 1922156 00:34:18.638 02:40:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:34:18.638 02:40:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:34:19.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1922156 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:34:19.570 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1922094 00:34:19.570 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:34:19.570 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:34:19.570 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:34:19.570 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:34:19.570 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:19.570 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:19.570 { 00:34:19.570 "params": { 00:34:19.570 "name": "Nvme$subsystem", 00:34:19.570 "trtype": "$TEST_TRANSPORT", 00:34:19.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:19.570 "adrfam": "ipv4", 00:34:19.570 "trsvcid": "$NVMF_PORT", 00:34:19.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:19.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:19.570 "hdgst": ${hdgst:-false}, 00:34:19.570 "ddgst": ${ddgst:-false} 00:34:19.570 }, 00:34:19.570 "method": "bdev_nvme_attach_controller" 00:34:19.570 } 00:34:19.570 EOF 00:34:19.570 )") 00:34:19.570 02:40:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:19.570 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:19.570 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:19.570 { 00:34:19.570 "params": { 00:34:19.570 "name": "Nvme$subsystem", 00:34:19.570 "trtype": "$TEST_TRANSPORT", 00:34:19.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:19.570 "adrfam": "ipv4", 00:34:19.570 "trsvcid": "$NVMF_PORT", 00:34:19.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:19.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:19.570 "hdgst": ${hdgst:-false}, 00:34:19.570 "ddgst": ${ddgst:-false} 00:34:19.570 }, 00:34:19.570 "method": "bdev_nvme_attach_controller" 00:34:19.570 } 00:34:19.570 EOF 00:34:19.570 )") 00:34:19.570 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:19.570 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:19.570 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:19.570 { 00:34:19.570 "params": { 00:34:19.570 "name": "Nvme$subsystem", 00:34:19.570 "trtype": "$TEST_TRANSPORT", 00:34:19.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:19.570 "adrfam": "ipv4", 00:34:19.570 "trsvcid": "$NVMF_PORT", 00:34:19.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:19.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:19.570 "hdgst": ${hdgst:-false}, 00:34:19.570 "ddgst": ${ddgst:-false} 00:34:19.570 }, 00:34:19.570 "method": "bdev_nvme_attach_controller" 00:34:19.570 } 00:34:19.570 EOF 00:34:19.570 )") 00:34:19.570 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:19.570 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:19.570 02:40:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:19.570 { 00:34:19.570 "params": { 00:34:19.570 "name": "Nvme$subsystem", 00:34:19.570 "trtype": "$TEST_TRANSPORT", 00:34:19.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:19.570 "adrfam": "ipv4", 00:34:19.570 "trsvcid": "$NVMF_PORT", 00:34:19.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:19.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:19.570 "hdgst": ${hdgst:-false}, 00:34:19.570 "ddgst": ${ddgst:-false} 00:34:19.570 }, 00:34:19.570 "method": "bdev_nvme_attach_controller" 00:34:19.570 } 00:34:19.570 EOF 00:34:19.570 )") 00:34:19.570 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:19.570 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:19.570 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:19.570 { 00:34:19.570 "params": { 00:34:19.570 "name": "Nvme$subsystem", 00:34:19.570 "trtype": "$TEST_TRANSPORT", 00:34:19.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:19.570 "adrfam": "ipv4", 00:34:19.570 "trsvcid": "$NVMF_PORT", 00:34:19.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:19.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:19.570 "hdgst": ${hdgst:-false}, 00:34:19.570 "ddgst": ${ddgst:-false} 00:34:19.570 }, 00:34:19.570 "method": "bdev_nvme_attach_controller" 00:34:19.570 } 00:34:19.570 EOF 00:34:19.570 )") 00:34:19.570 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:19.570 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:19.570 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:19.570 { 00:34:19.570 "params": { 00:34:19.570 "name": "Nvme$subsystem", 00:34:19.570 "trtype": "$TEST_TRANSPORT", 00:34:19.570 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:34:19.570 "adrfam": "ipv4", 00:34:19.570 "trsvcid": "$NVMF_PORT", 00:34:19.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:19.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:19.571 "hdgst": ${hdgst:-false}, 00:34:19.571 "ddgst": ${ddgst:-false} 00:34:19.571 }, 00:34:19.571 "method": "bdev_nvme_attach_controller" 00:34:19.571 } 00:34:19.571 EOF 00:34:19.571 )") 00:34:19.571 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:19.571 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:19.571 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:19.571 { 00:34:19.571 "params": { 00:34:19.571 "name": "Nvme$subsystem", 00:34:19.571 "trtype": "$TEST_TRANSPORT", 00:34:19.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:19.571 "adrfam": "ipv4", 00:34:19.571 "trsvcid": "$NVMF_PORT", 00:34:19.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:19.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:19.571 "hdgst": ${hdgst:-false}, 00:34:19.571 "ddgst": ${ddgst:-false} 00:34:19.571 }, 00:34:19.571 "method": "bdev_nvme_attach_controller" 00:34:19.571 } 00:34:19.571 EOF 00:34:19.571 )") 00:34:19.571 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:19.571 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:19.571 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:19.571 { 00:34:19.571 "params": { 00:34:19.571 "name": "Nvme$subsystem", 00:34:19.571 "trtype": "$TEST_TRANSPORT", 00:34:19.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:19.571 "adrfam": "ipv4", 00:34:19.571 "trsvcid": "$NVMF_PORT", 00:34:19.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:19.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:19.571 
"hdgst": ${hdgst:-false}, 00:34:19.571 "ddgst": ${ddgst:-false} 00:34:19.571 }, 00:34:19.571 "method": "bdev_nvme_attach_controller" 00:34:19.571 } 00:34:19.571 EOF 00:34:19.571 )") 00:34:19.571 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:19.571 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:19.571 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:19.571 { 00:34:19.571 "params": { 00:34:19.571 "name": "Nvme$subsystem", 00:34:19.571 "trtype": "$TEST_TRANSPORT", 00:34:19.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:19.571 "adrfam": "ipv4", 00:34:19.571 "trsvcid": "$NVMF_PORT", 00:34:19.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:19.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:19.571 "hdgst": ${hdgst:-false}, 00:34:19.571 "ddgst": ${ddgst:-false} 00:34:19.571 }, 00:34:19.571 "method": "bdev_nvme_attach_controller" 00:34:19.571 } 00:34:19.571 EOF 00:34:19.571 )") 00:34:19.571 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:19.571 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:19.571 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:19.571 { 00:34:19.571 "params": { 00:34:19.571 "name": "Nvme$subsystem", 00:34:19.571 "trtype": "$TEST_TRANSPORT", 00:34:19.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:19.571 "adrfam": "ipv4", 00:34:19.571 "trsvcid": "$NVMF_PORT", 00:34:19.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:19.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:19.571 "hdgst": ${hdgst:-false}, 00:34:19.571 "ddgst": ${ddgst:-false} 00:34:19.571 }, 00:34:19.571 "method": "bdev_nvme_attach_controller" 00:34:19.571 } 00:34:19.571 EOF 00:34:19.571 )") 00:34:19.571 02:40:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:19.571 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:34:19.571 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:34:19.571 02:40:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:19.571 "params": { 00:34:19.571 "name": "Nvme1", 00:34:19.571 "trtype": "tcp", 00:34:19.571 "traddr": "10.0.0.2", 00:34:19.571 "adrfam": "ipv4", 00:34:19.571 "trsvcid": "4420", 00:34:19.571 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:19.571 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:19.571 "hdgst": false, 00:34:19.571 "ddgst": false 00:34:19.571 }, 00:34:19.571 "method": "bdev_nvme_attach_controller" 00:34:19.571 },{ 00:34:19.571 "params": { 00:34:19.571 "name": "Nvme2", 00:34:19.571 "trtype": "tcp", 00:34:19.571 "traddr": "10.0.0.2", 00:34:19.571 "adrfam": "ipv4", 00:34:19.571 "trsvcid": "4420", 00:34:19.571 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:19.571 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:19.571 "hdgst": false, 00:34:19.571 "ddgst": false 00:34:19.571 }, 00:34:19.571 "method": "bdev_nvme_attach_controller" 00:34:19.571 },{ 00:34:19.571 "params": { 00:34:19.571 "name": "Nvme3", 00:34:19.571 "trtype": "tcp", 00:34:19.571 "traddr": "10.0.0.2", 00:34:19.571 "adrfam": "ipv4", 00:34:19.571 "trsvcid": "4420", 00:34:19.571 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:34:19.571 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:34:19.571 "hdgst": false, 00:34:19.571 "ddgst": false 00:34:19.571 }, 00:34:19.571 "method": "bdev_nvme_attach_controller" 00:34:19.571 },{ 00:34:19.571 "params": { 00:34:19.571 "name": "Nvme4", 00:34:19.571 "trtype": "tcp", 00:34:19.571 "traddr": "10.0.0.2", 00:34:19.571 "adrfam": "ipv4", 00:34:19.571 "trsvcid": "4420", 00:34:19.571 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:34:19.571 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:34:19.571 "hdgst": false, 00:34:19.571 
"ddgst": false 00:34:19.571 }, 00:34:19.571 "method": "bdev_nvme_attach_controller" 00:34:19.571 },{ 00:34:19.571 "params": { 00:34:19.571 "name": "Nvme5", 00:34:19.571 "trtype": "tcp", 00:34:19.571 "traddr": "10.0.0.2", 00:34:19.571 "adrfam": "ipv4", 00:34:19.571 "trsvcid": "4420", 00:34:19.571 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:34:19.571 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:34:19.571 "hdgst": false, 00:34:19.571 "ddgst": false 00:34:19.571 }, 00:34:19.571 "method": "bdev_nvme_attach_controller" 00:34:19.571 },{ 00:34:19.571 "params": { 00:34:19.571 "name": "Nvme6", 00:34:19.571 "trtype": "tcp", 00:34:19.571 "traddr": "10.0.0.2", 00:34:19.571 "adrfam": "ipv4", 00:34:19.571 "trsvcid": "4420", 00:34:19.571 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:34:19.571 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:34:19.571 "hdgst": false, 00:34:19.571 "ddgst": false 00:34:19.571 }, 00:34:19.571 "method": "bdev_nvme_attach_controller" 00:34:19.571 },{ 00:34:19.571 "params": { 00:34:19.571 "name": "Nvme7", 00:34:19.571 "trtype": "tcp", 00:34:19.571 "traddr": "10.0.0.2", 00:34:19.571 "adrfam": "ipv4", 00:34:19.571 "trsvcid": "4420", 00:34:19.571 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:34:19.571 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:34:19.571 "hdgst": false, 00:34:19.571 "ddgst": false 00:34:19.571 }, 00:34:19.571 "method": "bdev_nvme_attach_controller" 00:34:19.571 },{ 00:34:19.571 "params": { 00:34:19.571 "name": "Nvme8", 00:34:19.571 "trtype": "tcp", 00:34:19.571 "traddr": "10.0.0.2", 00:34:19.571 "adrfam": "ipv4", 00:34:19.571 "trsvcid": "4420", 00:34:19.571 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:34:19.571 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:34:19.571 "hdgst": false, 00:34:19.571 "ddgst": false 00:34:19.571 }, 00:34:19.571 "method": "bdev_nvme_attach_controller" 00:34:19.571 },{ 00:34:19.571 "params": { 00:34:19.571 "name": "Nvme9", 00:34:19.571 "trtype": "tcp", 00:34:19.571 "traddr": "10.0.0.2", 00:34:19.571 "adrfam": "ipv4", 00:34:19.571 
"trsvcid": "4420", 00:34:19.571 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:34:19.571 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:34:19.571 "hdgst": false, 00:34:19.571 "ddgst": false 00:34:19.571 }, 00:34:19.571 "method": "bdev_nvme_attach_controller" 00:34:19.571 },{ 00:34:19.571 "params": { 00:34:19.571 "name": "Nvme10", 00:34:19.571 "trtype": "tcp", 00:34:19.571 "traddr": "10.0.0.2", 00:34:19.571 "adrfam": "ipv4", 00:34:19.571 "trsvcid": "4420", 00:34:19.571 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:34:19.571 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:34:19.571 "hdgst": false, 00:34:19.571 "ddgst": false 00:34:19.571 }, 00:34:19.571 "method": "bdev_nvme_attach_controller" 00:34:19.571 }' 00:34:19.829 [2024-07-11 02:40:09.996951] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:34:19.829 [2024-07-11 02:40:09.997055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1922477 ] 00:34:19.829 EAL: No free 2048 kB hugepages reported on node 1 00:34:19.829 [2024-07-11 02:40:10.063457] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:19.829 [2024-07-11 02:40:10.153231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:21.202 Running I/O for 1 seconds... 
00:34:22.577 00:34:22.577 Latency(us) 00:34:22.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:22.577 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:22.577 Verification LBA range: start 0x0 length 0x400 00:34:22.577 Nvme1n1 : 1.21 211.61 13.23 0.00 0.00 298624.19 23107.51 296708.17 00:34:22.577 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:22.577 Verification LBA range: start 0x0 length 0x400 00:34:22.577 Nvme2n1 : 1.17 164.40 10.27 0.00 0.00 377419.60 40583.77 304475.40 00:34:22.577 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:22.577 Verification LBA range: start 0x0 length 0x400 00:34:22.577 Nvme3n1 : 1.22 210.21 13.14 0.00 0.00 289613.94 20388.98 299815.06 00:34:22.577 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:22.577 Verification LBA range: start 0x0 length 0x400 00:34:22.577 Nvme4n1 : 1.23 208.70 13.04 0.00 0.00 285649.73 20680.25 302921.96 00:34:22.577 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:22.577 Verification LBA range: start 0x0 length 0x400 00:34:22.577 Nvme5n1 : 1.24 206.75 12.92 0.00 0.00 283356.35 23398.78 306028.85 00:34:22.577 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:22.577 Verification LBA range: start 0x0 length 0x400 00:34:22.577 Nvme6n1 : 1.24 207.22 12.95 0.00 0.00 276888.65 21165.70 296708.17 00:34:22.577 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:22.577 Verification LBA range: start 0x0 length 0x400 00:34:22.577 Nvme7n1 : 1.23 212.19 13.26 0.00 0.00 263868.33 3640.89 292047.83 00:34:22.577 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:22.577 Verification LBA range: start 0x0 length 0x400 00:34:22.577 Nvme8n1 : 1.24 205.76 12.86 0.00 0.00 267746.23 18544.26 302921.96 00:34:22.577 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:34:22.577 Verification LBA range: start 0x0 length 0x400 00:34:22.577 Nvme9n1 : 1.25 205.14 12.82 0.00 0.00 263200.43 23204.60 299815.06 00:34:22.577 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:22.577 Verification LBA range: start 0x0 length 0x400 00:34:22.577 Nvme10n1 : 1.21 158.99 9.94 0.00 0.00 330295.37 27379.48 330883.98 00:34:22.577 =================================================================================================================== 00:34:22.577 Total : 1990.96 124.44 0.00 0.00 290443.69 3640.89 330883.98 00:34:22.835 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:34:22.835 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:34:22.835 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:34:22.835 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:22.835 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:34:22.835 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:22.835 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:34:22.835 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:22.835 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:34:22.835 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:22.835 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:22.835 rmmod nvme_tcp 00:34:22.835 rmmod nvme_fabrics 00:34:22.835 rmmod 
nvme_keyring 00:34:22.835 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:22.835 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:34:22.835 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:34:22.835 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1922094 ']' 00:34:22.835 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1922094 00:34:22.835 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1922094 ']' 00:34:22.835 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1922094 00:34:22.835 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:34:22.835 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:22.835 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1922094 00:34:22.835 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:22.835 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:22.835 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1922094' 00:34:22.835 killing process with pid 1922094 00:34:22.835 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1922094 00:34:22.835 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1922094 00:34:23.096 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:23.096 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:23.096 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:23.096 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:23.096 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:23.096 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:23.096 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:23.096 02:40:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:25.660 00:34:25.660 real 0m11.380s 00:34:25.660 user 0m34.046s 00:34:25.660 sys 0m2.894s 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:25.660 ************************************ 00:34:25.660 END TEST nvmf_shutdown_tc1 00:34:25.660 ************************************ 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:25.660 ************************************ 00:34:25.660 START TEST nvmf_shutdown_tc2 00:34:25.660 ************************************ 
00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:34:25.660 02:40:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:25.660 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:34:25.661 Found 0000:08:00.0 (0x8086 - 0x159b) 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:34:25.661 Found 0000:08:00.1 (0x8086 - 0x159b) 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:25.661 02:40:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:34:25.661 Found net devices under 0000:08:00.0: cvl_0_0 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:34:25.661 Found net devices under 0000:08:00.1: cvl_0_1 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip 
link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:25.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:25.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:34:25.661 00:34:25.661 --- 10.0.0.2 ping statistics --- 00:34:25.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:25.661 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:25.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:25.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:34:25.661 00:34:25.661 --- 10.0.0.1 ping statistics --- 00:34:25.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:25.661 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1923155 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1923155 00:34:25.661 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1923155 ']' 00:34:25.662 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:25.662 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:25.662 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:25.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:25.662 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:25.662 02:40:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:25.662 [2024-07-11 02:40:15.819922] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:34:25.662 [2024-07-11 02:40:15.820015] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:25.662 EAL: No free 2048 kB hugepages reported on node 1 00:34:25.662 [2024-07-11 02:40:15.884142] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:25.662 [2024-07-11 02:40:15.971945] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:25.662 [2024-07-11 02:40:15.972002] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:25.662 [2024-07-11 02:40:15.972019] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:25.662 [2024-07-11 02:40:15.972033] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:25.662 [2024-07-11 02:40:15.972045] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:25.662 [2024-07-11 02:40:15.972130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:25.662 [2024-07-11 02:40:15.972184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:25.662 [2024-07-11 02:40:15.972232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:34:25.662 [2024-07-11 02:40:15.972235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:25.923 [2024-07-11 02:40:16.120316] tcp.c: 672:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for 
i in "${num_subsystems[@]}" 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:34:25.923 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:34:25.924 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.924 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:25.924 Malloc1 00:34:25.924 [2024-07-11 02:40:16.206718] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:25.924 Malloc2 00:34:25.924 Malloc3 00:34:25.924 Malloc4 00:34:26.182 Malloc5 00:34:26.182 Malloc6 00:34:26.182 Malloc7 00:34:26.182 Malloc8 00:34:26.182 Malloc9 00:34:26.441 Malloc10 00:34:26.441 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.441 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:34:26.441 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:34:26.441 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:26.441 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1923224 00:34:26.441 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1923224 /var/tmp/bdevperf.sock 00:34:26.441 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1923224 ']' 00:34:26.441 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:26.441 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:34:26.441 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:34:26.441 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:26.441 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:34:26.441 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:26.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:34:26.441 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:34:26.441 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:26.441 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:26.441 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:26.441 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:26.441 { 00:34:26.441 "params": { 00:34:26.441 "name": "Nvme$subsystem", 00:34:26.441 "trtype": "$TEST_TRANSPORT", 00:34:26.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:26.441 "adrfam": "ipv4", 00:34:26.441 "trsvcid": "$NVMF_PORT", 00:34:26.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:26.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:26.441 "hdgst": ${hdgst:-false}, 00:34:26.441 "ddgst": ${ddgst:-false} 00:34:26.441 }, 00:34:26.441 "method": "bdev_nvme_attach_controller" 00:34:26.441 } 00:34:26.441 EOF 00:34:26.441 )") 00:34:26.441 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:34:26.441 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:26.441 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:26.441 { 00:34:26.441 "params": { 00:34:26.441 "name": "Nvme$subsystem", 00:34:26.441 "trtype": "$TEST_TRANSPORT", 00:34:26.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:26.441 "adrfam": "ipv4", 00:34:26.441 "trsvcid": "$NVMF_PORT", 00:34:26.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:26.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:26.441 "hdgst": ${hdgst:-false}, 00:34:26.441 "ddgst": ${ddgst:-false} 00:34:26.441 }, 00:34:26.441 "method": "bdev_nvme_attach_controller" 00:34:26.441 } 00:34:26.441 EOF 00:34:26.441 
)") 00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:26.442 { 00:34:26.442 "params": { 00:34:26.442 "name": "Nvme$subsystem", 00:34:26.442 "trtype": "$TEST_TRANSPORT", 00:34:26.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:26.442 "adrfam": "ipv4", 00:34:26.442 "trsvcid": "$NVMF_PORT", 00:34:26.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:26.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:26.442 "hdgst": ${hdgst:-false}, 00:34:26.442 "ddgst": ${ddgst:-false} 00:34:26.442 }, 00:34:26.442 "method": "bdev_nvme_attach_controller" 00:34:26.442 } 00:34:26.442 EOF 00:34:26.442 )") 00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:26.442 { 00:34:26.442 "params": { 00:34:26.442 "name": "Nvme$subsystem", 00:34:26.442 "trtype": "$TEST_TRANSPORT", 00:34:26.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:26.442 "adrfam": "ipv4", 00:34:26.442 "trsvcid": "$NVMF_PORT", 00:34:26.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:26.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:26.442 "hdgst": ${hdgst:-false}, 00:34:26.442 "ddgst": ${ddgst:-false} 00:34:26.442 }, 00:34:26.442 "method": "bdev_nvme_attach_controller" 00:34:26.442 } 00:34:26.442 EOF 00:34:26.442 )") 00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:26.442 02:40:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:26.442 { 00:34:26.442 "params": { 00:34:26.442 "name": "Nvme$subsystem", 00:34:26.442 "trtype": "$TEST_TRANSPORT", 00:34:26.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:26.442 "adrfam": "ipv4", 00:34:26.442 "trsvcid": "$NVMF_PORT", 00:34:26.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:26.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:26.442 "hdgst": ${hdgst:-false}, 00:34:26.442 "ddgst": ${ddgst:-false} 00:34:26.442 }, 00:34:26.442 "method": "bdev_nvme_attach_controller" 00:34:26.442 } 00:34:26.442 EOF 00:34:26.442 )") 00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:26.442 { 00:34:26.442 "params": { 00:34:26.442 "name": "Nvme$subsystem", 00:34:26.442 "trtype": "$TEST_TRANSPORT", 00:34:26.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:26.442 "adrfam": "ipv4", 00:34:26.442 "trsvcid": "$NVMF_PORT", 00:34:26.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:26.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:26.442 "hdgst": ${hdgst:-false}, 00:34:26.442 "ddgst": ${ddgst:-false} 00:34:26.442 }, 00:34:26.442 "method": "bdev_nvme_attach_controller" 00:34:26.442 } 00:34:26.442 EOF 00:34:26.442 )") 00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:26.442 { 00:34:26.442 "params": { 00:34:26.442 "name": "Nvme$subsystem", 00:34:26.442 "trtype": "$TEST_TRANSPORT", 00:34:26.442 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:34:26.442 "adrfam": "ipv4", 00:34:26.442 "trsvcid": "$NVMF_PORT", 00:34:26.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:26.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:26.442 "hdgst": ${hdgst:-false}, 00:34:26.442 "ddgst": ${ddgst:-false} 00:34:26.442 }, 00:34:26.442 "method": "bdev_nvme_attach_controller" 00:34:26.442 } 00:34:26.442 EOF 00:34:26.442 )") 00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:26.442 { 00:34:26.442 "params": { 00:34:26.442 "name": "Nvme$subsystem", 00:34:26.442 "trtype": "$TEST_TRANSPORT", 00:34:26.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:26.442 "adrfam": "ipv4", 00:34:26.442 "trsvcid": "$NVMF_PORT", 00:34:26.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:26.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:26.442 "hdgst": ${hdgst:-false}, 00:34:26.442 "ddgst": ${ddgst:-false} 00:34:26.442 }, 00:34:26.442 "method": "bdev_nvme_attach_controller" 00:34:26.442 } 00:34:26.442 EOF 00:34:26.442 )") 00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:26.442 { 00:34:26.442 "params": { 00:34:26.442 "name": "Nvme$subsystem", 00:34:26.442 "trtype": "$TEST_TRANSPORT", 00:34:26.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:26.442 "adrfam": "ipv4", 00:34:26.442 "trsvcid": "$NVMF_PORT", 00:34:26.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:26.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:26.442 
"hdgst": ${hdgst:-false}, 00:34:26.442 "ddgst": ${ddgst:-false} 00:34:26.442 }, 00:34:26.442 "method": "bdev_nvme_attach_controller" 00:34:26.442 } 00:34:26.442 EOF 00:34:26.442 )") 00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:26.442 { 00:34:26.442 "params": { 00:34:26.442 "name": "Nvme$subsystem", 00:34:26.442 "trtype": "$TEST_TRANSPORT", 00:34:26.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:26.442 "adrfam": "ipv4", 00:34:26.442 "trsvcid": "$NVMF_PORT", 00:34:26.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:26.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:26.442 "hdgst": ${hdgst:-false}, 00:34:26.442 "ddgst": ${ddgst:-false} 00:34:26.442 }, 00:34:26.442 "method": "bdev_nvme_attach_controller" 00:34:26.442 } 00:34:26.442 EOF 00:34:26.442 )") 00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:34:26.442 02:40:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:26.442 "params": { 00:34:26.442 "name": "Nvme1", 00:34:26.442 "trtype": "tcp", 00:34:26.442 "traddr": "10.0.0.2", 00:34:26.442 "adrfam": "ipv4", 00:34:26.442 "trsvcid": "4420", 00:34:26.442 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:26.442 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:26.442 "hdgst": false, 00:34:26.442 "ddgst": false 00:34:26.442 }, 00:34:26.442 "method": "bdev_nvme_attach_controller" 00:34:26.442 },{ 00:34:26.442 "params": { 00:34:26.442 "name": "Nvme2", 00:34:26.442 "trtype": "tcp", 00:34:26.442 "traddr": "10.0.0.2", 00:34:26.442 "adrfam": "ipv4", 00:34:26.442 "trsvcid": "4420", 00:34:26.442 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:26.442 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:26.442 "hdgst": false, 00:34:26.442 "ddgst": false 00:34:26.442 }, 00:34:26.442 "method": "bdev_nvme_attach_controller" 00:34:26.442 },{ 00:34:26.442 "params": { 00:34:26.442 "name": "Nvme3", 00:34:26.442 "trtype": "tcp", 00:34:26.442 "traddr": "10.0.0.2", 00:34:26.442 "adrfam": "ipv4", 00:34:26.442 "trsvcid": "4420", 00:34:26.442 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:34:26.442 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:34:26.442 "hdgst": false, 00:34:26.442 "ddgst": false 00:34:26.442 }, 00:34:26.442 "method": "bdev_nvme_attach_controller" 00:34:26.442 },{ 00:34:26.442 "params": { 00:34:26.442 "name": "Nvme4", 00:34:26.442 "trtype": "tcp", 00:34:26.442 "traddr": "10.0.0.2", 00:34:26.442 "adrfam": "ipv4", 00:34:26.442 "trsvcid": "4420", 00:34:26.442 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:34:26.442 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:34:26.442 "hdgst": false, 00:34:26.442 "ddgst": false 00:34:26.442 }, 00:34:26.442 "method": "bdev_nvme_attach_controller" 00:34:26.442 },{ 00:34:26.442 "params": { 00:34:26.442 "name": "Nvme5", 00:34:26.442 
"trtype": "tcp", 00:34:26.442 "traddr": "10.0.0.2", 00:34:26.442 "adrfam": "ipv4", 00:34:26.442 "trsvcid": "4420", 00:34:26.442 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:34:26.442 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:34:26.442 "hdgst": false, 00:34:26.442 "ddgst": false 00:34:26.442 }, 00:34:26.442 "method": "bdev_nvme_attach_controller" 00:34:26.442 },{ 00:34:26.442 "params": { 00:34:26.442 "name": "Nvme6", 00:34:26.442 "trtype": "tcp", 00:34:26.442 "traddr": "10.0.0.2", 00:34:26.442 "adrfam": "ipv4", 00:34:26.442 "trsvcid": "4420", 00:34:26.442 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:34:26.442 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:34:26.442 "hdgst": false, 00:34:26.442 "ddgst": false 00:34:26.442 }, 00:34:26.442 "method": "bdev_nvme_attach_controller" 00:34:26.442 },{ 00:34:26.442 "params": { 00:34:26.442 "name": "Nvme7", 00:34:26.442 "trtype": "tcp", 00:34:26.442 "traddr": "10.0.0.2", 00:34:26.442 "adrfam": "ipv4", 00:34:26.442 "trsvcid": "4420", 00:34:26.443 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:34:26.443 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:34:26.443 "hdgst": false, 00:34:26.443 "ddgst": false 00:34:26.443 }, 00:34:26.443 "method": "bdev_nvme_attach_controller" 00:34:26.443 },{ 00:34:26.443 "params": { 00:34:26.443 "name": "Nvme8", 00:34:26.443 "trtype": "tcp", 00:34:26.443 "traddr": "10.0.0.2", 00:34:26.443 "adrfam": "ipv4", 00:34:26.443 "trsvcid": "4420", 00:34:26.443 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:34:26.443 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:34:26.443 "hdgst": false, 00:34:26.443 "ddgst": false 00:34:26.443 }, 00:34:26.443 "method": "bdev_nvme_attach_controller" 00:34:26.443 },{ 00:34:26.443 "params": { 00:34:26.443 "name": "Nvme9", 00:34:26.443 "trtype": "tcp", 00:34:26.443 "traddr": "10.0.0.2", 00:34:26.443 "adrfam": "ipv4", 00:34:26.443 "trsvcid": "4420", 00:34:26.443 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:34:26.443 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:34:26.443 "hdgst": false, 00:34:26.443 "ddgst": 
false 00:34:26.443 }, 00:34:26.443 "method": "bdev_nvme_attach_controller" 00:34:26.443 },{ 00:34:26.443 "params": { 00:34:26.443 "name": "Nvme10", 00:34:26.443 "trtype": "tcp", 00:34:26.443 "traddr": "10.0.0.2", 00:34:26.443 "adrfam": "ipv4", 00:34:26.443 "trsvcid": "4420", 00:34:26.443 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:34:26.443 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:34:26.443 "hdgst": false, 00:34:26.443 "ddgst": false 00:34:26.443 }, 00:34:26.443 "method": "bdev_nvme_attach_controller" 00:34:26.443 }' 00:34:26.443 [2024-07-11 02:40:16.705795] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:34:26.443 [2024-07-11 02:40:16.705894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1923224 ] 00:34:26.443 EAL: No free 2048 kB hugepages reported on node 1 00:34:26.443 [2024-07-11 02:40:16.768482] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:26.443 [2024-07-11 02:40:16.855932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:28.345 Running I/O for 10 seconds... 
00:34:28.604 02:40:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:28.604 02:40:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:34:28.604 02:40:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:34:28.604 02:40:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.604 02:40:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:28.604 02:40:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.604 02:40:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:34:28.604 02:40:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:34:28.604 02:40:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:34:28.604 02:40:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:34:28.604 02:40:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:34:28.604 02:40:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:34:28.604 02:40:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:34:28.604 02:40:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:34:28.604 02:40:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:34:28.604 02:40:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.604 02:40:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set 
+x 00:34:28.604 02:40:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.604 02:40:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:34:28.604 02:40:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:34:28.604 02:40:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:34:28.863 02:40:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:34:28.863 02:40:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:34:28.863 02:40:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:34:28.863 02:40:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:34:28.863 02:40:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.864 02:40:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:28.864 02:40:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.864 02:40:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:34:28.864 02:40:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:34:28.864 02:40:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:34:28.864 02:40:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:34:28.864 02:40:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:34:28.864 02:40:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1923224 00:34:28.864 02:40:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@948 -- # '[' -z 1923224 ']' 00:34:28.864 02:40:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1923224 00:34:28.864 02:40:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:34:28.864 02:40:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:28.864 02:40:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1923224 00:34:28.864 02:40:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:28.864 02:40:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:28.864 02:40:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1923224' 00:34:28.864 killing process with pid 1923224 00:34:28.864 02:40:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1923224 00:34:28.864 02:40:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1923224 00:34:28.864 Received shutdown signal, test time was about 0.981430 seconds 00:34:28.864 00:34:28.864 Latency(us) 00:34:28.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:28.864 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:28.864 Verification LBA range: start 0x0 length 0x400 00:34:28.864 Nvme1n1 : 0.97 197.67 12.35 0.00 0.00 319368.15 25049.32 321563.31 00:34:28.864 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:28.864 Verification LBA range: start 0x0 length 0x400 00:34:28.864 Nvme2n1 : 0.98 195.90 12.24 0.00 0.00 314632.47 34952.53 320009.86 00:34:28.864 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:28.864 Verification LBA range: start 0x0 length 0x400 
00:34:28.864 Nvme3n1 : 0.95 202.21 12.64 0.00 0.00 296456.34 20874.43 315349.52 00:34:28.864 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:28.864 Verification LBA range: start 0x0 length 0x400 00:34:28.864 Nvme4n1 : 0.97 203.46 12.72 0.00 0.00 286570.54 3859.34 259425.47 00:34:28.864 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:28.864 Verification LBA range: start 0x0 length 0x400 00:34:28.864 Nvme5n1 : 0.96 200.95 12.56 0.00 0.00 283719.81 46603.38 293601.28 00:34:28.864 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:28.864 Verification LBA range: start 0x0 length 0x400 00:34:28.864 Nvme6n1 : 0.96 200.29 12.52 0.00 0.00 277206.16 25243.50 315349.52 00:34:28.864 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:28.864 Verification LBA range: start 0x0 length 0x400 00:34:28.864 Nvme7n1 : 0.97 198.72 12.42 0.00 0.00 272312.19 26796.94 290494.39 00:34:28.864 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:28.864 Verification LBA range: start 0x0 length 0x400 00:34:28.864 Nvme8n1 : 0.98 196.48 12.28 0.00 0.00 268504.18 19126.80 318456.41 00:34:28.864 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:28.864 Verification LBA range: start 0x0 length 0x400 00:34:28.864 Nvme9n1 : 0.93 138.08 8.63 0.00 0.00 366909.44 23398.78 330883.98 00:34:28.864 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:28.864 Verification LBA range: start 0x0 length 0x400 00:34:28.864 Nvme10n1 : 0.94 136.16 8.51 0.00 0.00 362120.15 22427.88 349525.33 00:34:28.864 =================================================================================================================== 00:34:28.864 Total : 1869.91 116.87 0.00 0.00 300474.37 3859.34 349525.33 00:34:29.122 02:40:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:34:30.056 02:40:20 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1923155 00:34:30.056 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:34:30.056 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:34:30.056 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:34:30.056 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:30.056 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:34:30.056 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:30.056 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:34:30.056 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:30.056 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:34:30.056 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:30.056 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:30.056 rmmod nvme_tcp 00:34:30.314 rmmod nvme_fabrics 00:34:30.314 rmmod nvme_keyring 00:34:30.314 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:30.314 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:34:30.314 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:34:30.314 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1923155 ']' 00:34:30.314 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@490 -- # killprocess 1923155 00:34:30.314 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1923155 ']' 00:34:30.314 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1923155 00:34:30.314 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:34:30.314 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:30.314 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1923155 00:34:30.314 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:30.314 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:30.314 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1923155' 00:34:30.314 killing process with pid 1923155 00:34:30.314 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1923155 00:34:30.314 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1923155 00:34:30.574 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:30.574 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:30.574 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:30.574 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:30.574 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:30.574 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:34:30.574 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:30.574 02:40:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:33.111 00:34:33.111 real 0m7.355s 00:34:33.111 user 0m21.973s 00:34:33.111 sys 0m1.410s 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:33.111 ************************************ 00:34:33.111 END TEST nvmf_shutdown_tc2 00:34:33.111 ************************************ 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:33.111 ************************************ 00:34:33.111 START TEST nvmf_shutdown_tc3 00:34:33.111 ************************************ 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:33.111 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:33.112 
02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:34:33.112 Found 0000:08:00.0 (0x8086 - 0x159b) 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:33.112 02:40:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:34:33.112 Found 0000:08:00.1 (0x8086 - 0x159b) 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:08:00.0: cvl_0_0' 00:34:33.112 Found net devices under 0000:08:00.0: cvl_0_0 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:34:33.112 Found net devices under 0000:08:00.1: cvl_0_1 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:33.112 02:40:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:33.112 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:33.112 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:33.112 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:33.112 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:33.112 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:33.112 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:33.112 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:33.112 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:33.112 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:33.112 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:33.112 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 
up 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:33.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:33.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:34:33.113 00:34:33.113 --- 10.0.0.2 ping statistics --- 00:34:33.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.113 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:33.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:33.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:34:33.113 00:34:33.113 --- 10.0.0.1 ping statistics --- 00:34:33.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.113 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp 
-o' 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1923933 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1923933 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1923933 ']' 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:33.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:33.113 [2024-07-11 02:40:23.199778] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:34:33.113 [2024-07-11 02:40:23.199883] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:33.113 EAL: No free 2048 kB hugepages reported on node 1 00:34:33.113 [2024-07-11 02:40:23.265301] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:33.113 [2024-07-11 02:40:23.353062] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:33.113 [2024-07-11 02:40:23.353114] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:33.113 [2024-07-11 02:40:23.353131] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:33.113 [2024-07-11 02:40:23.353145] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:33.113 [2024-07-11 02:40:23.353156] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:33.113 [2024-07-11 02:40:23.353241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:33.113 [2024-07-11 02:40:23.353294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:33.113 [2024-07-11 02:40:23.353350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:34:33.113 [2024-07-11 02:40:23.353353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:33.113 [2024-07-11 02:40:23.490124] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:34:33.113 
02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:34:33.113 02:40:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.113 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:33.370 Malloc1 00:34:33.370 [2024-07-11 02:40:23.563521] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:33.370 Malloc2 00:34:33.370 Malloc3 00:34:33.370 Malloc4 00:34:33.370 Malloc5 00:34:33.370 Malloc6 00:34:33.628 Malloc7 00:34:33.628 Malloc8 00:34:33.628 Malloc9 00:34:33.628 Malloc10 00:34:33.628 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.628 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:34:33.628 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:33.628 02:40:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:33.628 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1924074 00:34:33.628 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 
1924074 /var/tmp/bdevperf.sock 00:34:33.628 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1924074 ']' 00:34:33.628 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:33.628 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:34:33.628 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:34:33.628 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:33.628 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:33.628 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:34:33.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:34:33.628 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:34:33.628 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:33.628 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:33.628 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:33.629 { 00:34:33.629 "params": { 00:34:33.629 "name": "Nvme$subsystem", 00:34:33.629 "trtype": "$TEST_TRANSPORT", 00:34:33.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:33.629 "adrfam": "ipv4", 00:34:33.629 "trsvcid": "$NVMF_PORT", 00:34:33.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:33.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:33.629 "hdgst": ${hdgst:-false}, 00:34:33.629 "ddgst": ${ddgst:-false} 00:34:33.629 }, 00:34:33.629 "method": "bdev_nvme_attach_controller" 00:34:33.629 } 00:34:33.629 EOF 00:34:33.629 )") 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:33.629 { 00:34:33.629 "params": { 00:34:33.629 "name": "Nvme$subsystem", 00:34:33.629 "trtype": "$TEST_TRANSPORT", 00:34:33.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:33.629 "adrfam": "ipv4", 00:34:33.629 "trsvcid": "$NVMF_PORT", 00:34:33.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:33.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:33.629 "hdgst": ${hdgst:-false}, 00:34:33.629 "ddgst": ${ddgst:-false} 00:34:33.629 }, 00:34:33.629 "method": "bdev_nvme_attach_controller" 00:34:33.629 } 00:34:33.629 EOF 00:34:33.629 
)") 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:33.629 { 00:34:33.629 "params": { 00:34:33.629 "name": "Nvme$subsystem", 00:34:33.629 "trtype": "$TEST_TRANSPORT", 00:34:33.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:33.629 "adrfam": "ipv4", 00:34:33.629 "trsvcid": "$NVMF_PORT", 00:34:33.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:33.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:33.629 "hdgst": ${hdgst:-false}, 00:34:33.629 "ddgst": ${ddgst:-false} 00:34:33.629 }, 00:34:33.629 "method": "bdev_nvme_attach_controller" 00:34:33.629 } 00:34:33.629 EOF 00:34:33.629 )") 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:33.629 { 00:34:33.629 "params": { 00:34:33.629 "name": "Nvme$subsystem", 00:34:33.629 "trtype": "$TEST_TRANSPORT", 00:34:33.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:33.629 "adrfam": "ipv4", 00:34:33.629 "trsvcid": "$NVMF_PORT", 00:34:33.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:33.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:33.629 "hdgst": ${hdgst:-false}, 00:34:33.629 "ddgst": ${ddgst:-false} 00:34:33.629 }, 00:34:33.629 "method": "bdev_nvme_attach_controller" 00:34:33.629 } 00:34:33.629 EOF 00:34:33.629 )") 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:33.629 02:40:24 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:33.629 { 00:34:33.629 "params": { 00:34:33.629 "name": "Nvme$subsystem", 00:34:33.629 "trtype": "$TEST_TRANSPORT", 00:34:33.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:33.629 "adrfam": "ipv4", 00:34:33.629 "trsvcid": "$NVMF_PORT", 00:34:33.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:33.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:33.629 "hdgst": ${hdgst:-false}, 00:34:33.629 "ddgst": ${ddgst:-false} 00:34:33.629 }, 00:34:33.629 "method": "bdev_nvme_attach_controller" 00:34:33.629 } 00:34:33.629 EOF 00:34:33.629 )") 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:33.629 { 00:34:33.629 "params": { 00:34:33.629 "name": "Nvme$subsystem", 00:34:33.629 "trtype": "$TEST_TRANSPORT", 00:34:33.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:33.629 "adrfam": "ipv4", 00:34:33.629 "trsvcid": "$NVMF_PORT", 00:34:33.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:33.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:33.629 "hdgst": ${hdgst:-false}, 00:34:33.629 "ddgst": ${ddgst:-false} 00:34:33.629 }, 00:34:33.629 "method": "bdev_nvme_attach_controller" 00:34:33.629 } 00:34:33.629 EOF 00:34:33.629 )") 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:33.629 { 00:34:33.629 "params": { 00:34:33.629 "name": "Nvme$subsystem", 00:34:33.629 "trtype": "$TEST_TRANSPORT", 00:34:33.629 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:34:33.629 "adrfam": "ipv4", 00:34:33.629 "trsvcid": "$NVMF_PORT", 00:34:33.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:33.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:33.629 "hdgst": ${hdgst:-false}, 00:34:33.629 "ddgst": ${ddgst:-false} 00:34:33.629 }, 00:34:33.629 "method": "bdev_nvme_attach_controller" 00:34:33.629 } 00:34:33.629 EOF 00:34:33.629 )") 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:33.629 { 00:34:33.629 "params": { 00:34:33.629 "name": "Nvme$subsystem", 00:34:33.629 "trtype": "$TEST_TRANSPORT", 00:34:33.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:33.629 "adrfam": "ipv4", 00:34:33.629 "trsvcid": "$NVMF_PORT", 00:34:33.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:33.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:33.629 "hdgst": ${hdgst:-false}, 00:34:33.629 "ddgst": ${ddgst:-false} 00:34:33.629 }, 00:34:33.629 "method": "bdev_nvme_attach_controller" 00:34:33.629 } 00:34:33.629 EOF 00:34:33.629 )") 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:33.629 { 00:34:33.629 "params": { 00:34:33.629 "name": "Nvme$subsystem", 00:34:33.629 "trtype": "$TEST_TRANSPORT", 00:34:33.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:33.629 "adrfam": "ipv4", 00:34:33.629 "trsvcid": "$NVMF_PORT", 00:34:33.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:33.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:33.629 
"hdgst": ${hdgst:-false}, 00:34:33.629 "ddgst": ${ddgst:-false} 00:34:33.629 }, 00:34:33.629 "method": "bdev_nvme_attach_controller" 00:34:33.629 } 00:34:33.629 EOF 00:34:33.629 )") 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:33.629 { 00:34:33.629 "params": { 00:34:33.629 "name": "Nvme$subsystem", 00:34:33.629 "trtype": "$TEST_TRANSPORT", 00:34:33.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:33.629 "adrfam": "ipv4", 00:34:33.629 "trsvcid": "$NVMF_PORT", 00:34:33.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:33.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:33.629 "hdgst": ${hdgst:-false}, 00:34:33.629 "ddgst": ${ddgst:-false} 00:34:33.629 }, 00:34:33.629 "method": "bdev_nvme_attach_controller" 00:34:33.629 } 00:34:33.629 EOF 00:34:33.629 )") 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:34:33.629 02:40:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:33.629 "params": { 00:34:33.629 "name": "Nvme1", 00:34:33.629 "trtype": "tcp", 00:34:33.629 "traddr": "10.0.0.2", 00:34:33.629 "adrfam": "ipv4", 00:34:33.629 "trsvcid": "4420", 00:34:33.629 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:33.629 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:33.629 "hdgst": false, 00:34:33.629 "ddgst": false 00:34:33.629 }, 00:34:33.629 "method": "bdev_nvme_attach_controller" 00:34:33.629 },{ 00:34:33.629 "params": { 00:34:33.629 "name": "Nvme2", 00:34:33.629 "trtype": "tcp", 00:34:33.629 "traddr": "10.0.0.2", 00:34:33.629 "adrfam": "ipv4", 00:34:33.629 "trsvcid": "4420", 00:34:33.629 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:33.629 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:33.629 "hdgst": false, 00:34:33.629 "ddgst": false 00:34:33.629 }, 00:34:33.629 "method": "bdev_nvme_attach_controller" 00:34:33.629 },{ 00:34:33.629 "params": { 00:34:33.630 "name": "Nvme3", 00:34:33.630 "trtype": "tcp", 00:34:33.630 "traddr": "10.0.0.2", 00:34:33.630 "adrfam": "ipv4", 00:34:33.630 "trsvcid": "4420", 00:34:33.630 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:34:33.630 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:34:33.630 "hdgst": false, 00:34:33.630 "ddgst": false 00:34:33.630 }, 00:34:33.630 "method": "bdev_nvme_attach_controller" 00:34:33.630 },{ 00:34:33.630 "params": { 00:34:33.630 "name": "Nvme4", 00:34:33.630 "trtype": "tcp", 00:34:33.630 "traddr": "10.0.0.2", 00:34:33.630 "adrfam": "ipv4", 00:34:33.630 "trsvcid": "4420", 00:34:33.630 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:34:33.630 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:34:33.630 "hdgst": false, 00:34:33.630 "ddgst": false 00:34:33.630 }, 00:34:33.630 "method": "bdev_nvme_attach_controller" 00:34:33.630 },{ 00:34:33.630 "params": { 00:34:33.630 "name": "Nvme5", 00:34:33.630 
"trtype": "tcp", 00:34:33.630 "traddr": "10.0.0.2", 00:34:33.630 "adrfam": "ipv4", 00:34:33.630 "trsvcid": "4420", 00:34:33.630 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:34:33.630 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:34:33.630 "hdgst": false, 00:34:33.630 "ddgst": false 00:34:33.630 }, 00:34:33.630 "method": "bdev_nvme_attach_controller" 00:34:33.630 },{ 00:34:33.630 "params": { 00:34:33.630 "name": "Nvme6", 00:34:33.630 "trtype": "tcp", 00:34:33.630 "traddr": "10.0.0.2", 00:34:33.630 "adrfam": "ipv4", 00:34:33.630 "trsvcid": "4420", 00:34:33.630 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:34:33.630 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:34:33.630 "hdgst": false, 00:34:33.630 "ddgst": false 00:34:33.630 }, 00:34:33.630 "method": "bdev_nvme_attach_controller" 00:34:33.630 },{ 00:34:33.630 "params": { 00:34:33.630 "name": "Nvme7", 00:34:33.630 "trtype": "tcp", 00:34:33.630 "traddr": "10.0.0.2", 00:34:33.630 "adrfam": "ipv4", 00:34:33.630 "trsvcid": "4420", 00:34:33.630 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:34:33.630 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:34:33.630 "hdgst": false, 00:34:33.630 "ddgst": false 00:34:33.630 }, 00:34:33.630 "method": "bdev_nvme_attach_controller" 00:34:33.630 },{ 00:34:33.630 "params": { 00:34:33.630 "name": "Nvme8", 00:34:33.630 "trtype": "tcp", 00:34:33.630 "traddr": "10.0.0.2", 00:34:33.630 "adrfam": "ipv4", 00:34:33.630 "trsvcid": "4420", 00:34:33.630 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:34:33.630 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:34:33.630 "hdgst": false, 00:34:33.630 "ddgst": false 00:34:33.630 }, 00:34:33.630 "method": "bdev_nvme_attach_controller" 00:34:33.630 },{ 00:34:33.630 "params": { 00:34:33.630 "name": "Nvme9", 00:34:33.630 "trtype": "tcp", 00:34:33.630 "traddr": "10.0.0.2", 00:34:33.630 "adrfam": "ipv4", 00:34:33.630 "trsvcid": "4420", 00:34:33.630 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:34:33.630 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:34:33.630 "hdgst": false, 00:34:33.630 "ddgst": 
false 00:34:33.630 }, 00:34:33.630 "method": "bdev_nvme_attach_controller" 00:34:33.630 },{ 00:34:33.630 "params": { 00:34:33.630 "name": "Nvme10", 00:34:33.630 "trtype": "tcp", 00:34:33.630 "traddr": "10.0.0.2", 00:34:33.630 "adrfam": "ipv4", 00:34:33.630 "trsvcid": "4420", 00:34:33.630 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:34:33.630 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:34:33.630 "hdgst": false, 00:34:33.630 "ddgst": false 00:34:33.630 }, 00:34:33.630 "method": "bdev_nvme_attach_controller" 00:34:33.630 }' 00:34:33.909 [2024-07-11 02:40:24.053627] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:34:33.909 [2024-07-11 02:40:24.053715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1924074 ] 00:34:33.909 EAL: No free 2048 kB hugepages reported on node 1 00:34:33.909 [2024-07-11 02:40:24.115199] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:33.909 [2024-07-11 02:40:24.202692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.808 Running I/O for 10 seconds... 
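The trace above shows nvmf/common.sh building one JSON fragment per subsystem with a heredoc, appending each to an array, and joining them with commas before handing the result to bdevperf. A minimal standalone sketch of that pattern follows; the transport, address, and port values are stand-ins, not taken from the real test rig, and the `jq .` pretty-printing step from the real script is omitted.

```shell
# Sketch of the nvmf/common.sh config-builder pattern seen in the trace:
# one JSON fragment per subsystem via heredoc, collected into an array.
# TEST_TRANSPORT / NVMF_FIRST_TARGET_IP / NVMF_PORT are stand-in values.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Joining with IFS=, reproduces the comma-separated list of
# bdev_nvme_attach_controller calls printed in the trace above.
IFS=,
printf '%s\n' "${config[*]}"
```

Collecting fragments in an array and joining at the end (rather than string concatenation inside the loop) is what lets the real script emit `},{`-separated entries without a trailing comma.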
00:34:35.808 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:35.808 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:34:35.808 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:34:35.808 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.808 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:35.808 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.809 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:35.809 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:34:35.809 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:34:35.809 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:34:35.809 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:34:35.809 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:34:35.809 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:34:35.809 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:34:35.809 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:34:35.809 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:34:35.809 
02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.809 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:35.809 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.809 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:34:35.809 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:34:35.809 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:34:36.066 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:34:36.066 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:34:36.066 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:34:36.066 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:34:36.066 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.066 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:36.066 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.066 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:34:36.066 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:34:36.066 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:34:36.324 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:34:36.324 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i 
!= 0 )) 00:34:36.324 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:34:36.324 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:34:36.324 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.324 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:36.603 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.603 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=135 00:34:36.603 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 135 -ge 100 ']' 00:34:36.603 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:34:36.603 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:34:36.603 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:34:36.603 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1923933 00:34:36.603 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1923933 ']' 00:34:36.603 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1923933 00:34:36.603 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:34:36.603 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:36.603 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1923933 00:34:36.603 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:36.603 
02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:36.603 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1923933' 00:34:36.603 killing process with pid 1923933 00:34:36.603 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1923933 00:34:36.603 02:40:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1923933 00:34:36.603 [2024-07-11 02:40:26.798307] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798478] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798500] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798527] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798543] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798558] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798591] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798608] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798622] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be 
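The waitforio sequence traced above (target/shutdown.sh@59-67) counts down from 10 retries, each time reading `num_read_ops` for Nvme1n1 via `rpc_cmd ... bdev_get_iostat | jq -r '.bdevs[0].num_read_ops'`, sleeping 0.25 s until the count reaches 100 (3, then 67, then 135 in this run). The sketch below reproduces that bounded-retry loop; the RPC query is replaced by a stub counter (an assumption, so the loop can run anywhere without an SPDK socket).

```shell
# Sketch of the shutdown.sh waitforio loop. bump_io is a stub standing in
# for: rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
#        | jq -r '.bdevs[0].num_read_ops'
fake_ops=0
bump_io() { fake_ops=$(( fake_ops + 64 )); }

waitforio() {
  local ret=1 i
  # Retry budget of 10 polls, mirroring shutdown.sh@59: (( i = 10 )).
  for (( i = 10; i != 0; i-- )); do
    bump_io
    if [ "$fake_ops" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25   # same back-off as shutdown.sh@67
  done
  return $ret
}

waitforio
```

The fixed retry budget is the important design point: if the bdevperf job never issues I/O, the loop fails after ~2.5 s instead of hanging the shutdown test, and the trap installed earlier (`kill -9 $perfpid || true; nvmftestfini`) still gets a chance to clean up.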
set 00:34:36.604 [2024-07-11 02:40:26.798662] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798679] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798693] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798706] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798720] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798734] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798749] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798763] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798777] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798791] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798805] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798820] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 
02:40:26.798834] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798848] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798875] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798893] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798907] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798922] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798936] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798950] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798964] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798978] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.798992] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799021] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799036] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799051] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799084] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799110] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799128] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799158] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799172] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799186] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799201] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799215] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799230] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799244] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799259] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799273] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799301] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799317] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799331] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799346] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799360] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799389] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799403] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799418] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799432] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799447] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799461] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.799475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17147a0 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.800685] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66510 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.800720] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66510 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.800737] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66510 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.800753] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66510 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.800768] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66510 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.800782] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66510 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.800797] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66510 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.800811] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66510 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.800826] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66510 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.800840] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66510 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.800854] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66510 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.800869] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66510 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.800883] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66510 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.800898] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66510 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.800912] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66510 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.800927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66510 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.800942] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66510 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.800956] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66510 is same with the state(5) to be set 00:34:36.604 [2024-07-11 02:40:26.800970] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66510 is same with the state(5) to be set 00:34:36.604 
[... identical message repeated for tqpair=0x1a66510, timestamps 2024-07-11 02:40:26.800984 through 02:40:26.801628 ...]
00:34:36.605 [2024-07-11 02:40:26.803135] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1714c40 is same with the state(5) to be set 
[... identical message repeated for tqpair=0x1714c40, timestamps 02:40:26.803135 through 02:40:26.804068 ...]
00:34:36.606 [2024-07-11 02:40:26.806409] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17155a0 is same with the state(5) to be set 
[... identical message repeated for tqpair=0x17155a0, timestamps 02:40:26.806409 through 02:40:26.807344 ...]
00:34:36.606 [2024-07-11 02:40:26.809659] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17163a0 is same with the state(5) to be set 
[... identical message repeated for tqpair=0x17163a0, timestamps 02:40:26.809659 through 02:40:26.810584 ...]
00:34:36.607 [2024-07-11 02:40:26.811583] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 
[... identical message repeated for tqpair=0x1a65bd0, timestamps 02:40:26.811583 through 02:40:26.811846 ...]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.607 [2024-07-11 02:40:26.811860] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.607 [2024-07-11 02:40:26.811885] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.607 [2024-07-11 02:40:26.811900] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.607 [2024-07-11 02:40:26.811913] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.607 [2024-07-11 02:40:26.811927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.607 [2024-07-11 02:40:26.811941] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.607 [2024-07-11 02:40:26.811954] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.607 [2024-07-11 02:40:26.811968] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.607 [2024-07-11 02:40:26.811982] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.607 [2024-07-11 02:40:26.812000] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.607 [2024-07-11 02:40:26.812014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.607 [2024-07-11 02:40:26.812028] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.607 [2024-07-11 02:40:26.812042] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.607 [2024-07-11 02:40:26.812056] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.607 [2024-07-11 02:40:26.812070] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.607 [2024-07-11 02:40:26.812083] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.607 [2024-07-11 02:40:26.812097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.607 [2024-07-11 02:40:26.812110] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.607 [2024-07-11 02:40:26.812132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.607 [2024-07-11 02:40:26.812147] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.607 [2024-07-11 02:40:26.812161] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.607 [2024-07-11 02:40:26.812175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.608 [2024-07-11 02:40:26.812188] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.608 [2024-07-11 02:40:26.812202] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.608 [2024-07-11 02:40:26.812216] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.608 [2024-07-11 02:40:26.812230] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.608 [2024-07-11 02:40:26.812243] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.608 [2024-07-11 02:40:26.812257] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.608 [2024-07-11 02:40:26.812276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.608 [2024-07-11 02:40:26.812290] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.608 [2024-07-11 02:40:26.812304] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.608 [2024-07-11 02:40:26.812318] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.608 [2024-07-11 02:40:26.812332] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.608 [2024-07-11 02:40:26.812345] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.608 [2024-07-11 02:40:26.812359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set 00:34:36.608 [2024-07-11 02:40:26.812373] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a65bd0 is same with the state(5) to be set
[... same *ERROR* message repeated for tqpair=0x1a65bd0 through 02:40:26.812487 ...]
00:34:36.608 [2024-07-11 02:40:26.813235] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66070 is same with the state(5) to be set
[... same *ERROR* message repeated for tqpair=0x1a66070 through 02:40:26.814017, interleaved with the nvme_qpair output below ...]
00:34:36.608 [2024-07-11 02:40:26.812394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:36.608 [2024-07-11 02:40:26.812436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... WRITE commands sqid:1 cid:43 through cid:63 (lba 21888 through 24448, step 128, len:128), each followed by ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, at timestamps 02:40:26.812468 through 02:40:26.813193 ...]
00:34:36.608 [2024-07-11 02:40:26.813211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:36.608 [2024-07-11 02:40:26.813226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ commands sqid:1 cid:1 through cid:22 (lba 16512 through 19200, step 128, len:128), each followed by ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, at timestamps 02:40:26.813245 through 02:40:26.814026 ...]
[2024-07-11 02:40:26.814031] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66070 is same with the state(5) to be set 00:34:36.610 [2024-07-11 02:40:26.814043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:12[2024-07-11 02:40:26.814045] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66070 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.610 the state(5) to be set 00:34:36.610 [2024-07-11 02:40:26.814061] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66070 is same with [2024-07-11 02:40:26.814062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:34:36.610 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.814077] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66070 is same with the state(5) to be set 00:34:36.610 [2024-07-11 02:40:26.814082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.610 [2024-07-11 02:40:26.814091] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66070 is same with the state(5) to be set 00:34:36.610 [2024-07-11 02:40:26.814098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.814105] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a66070 is same with the state(5) to be set 00:34:36.610 [2024-07-11 02:40:26.814117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.610 [2024-07-11 02:40:26.814133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.814150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.610 [2024-07-11 02:40:26.814166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.814184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.610 [2024-07-11 02:40:26.814200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.814218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.610 [2024-07-11 02:40:26.814234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.814252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.610 [2024-07-11 02:40:26.814267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.814285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.610 [2024-07-11 02:40:26.814300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.814318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.610 
[2024-07-11 02:40:26.814337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.814355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.610 [2024-07-11 02:40:26.814371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.814389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.610 [2024-07-11 02:40:26.814405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.814422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.610 [2024-07-11 02:40:26.814437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.814455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.610 [2024-07-11 02:40:26.814470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.814487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.610 [2024-07-11 02:40:26.814502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.814528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.610 [2024-07-11 02:40:26.814544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.814561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.610 [2024-07-11 02:40:26.814576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.814594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.610 [2024-07-11 02:40:26.814609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.814626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.610 [2024-07-11 02:40:26.814641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.814658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.610 [2024-07-11 02:40:26.814673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.814721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.610 [2024-07-11 02:40:26.814791] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdb2ce0 was 
disconnected and freed. reset controller. 00:34:36.610 [2024-07-11 02:40:26.815181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.610 [2024-07-11 02:40:26.815207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.815229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.610 [2024-07-11 02:40:26.815244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.815260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.610 [2024-07-11 02:40:26.815276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.815291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.610 [2024-07-11 02:40:26.815306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.815320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80d930 is same with the state(5) to be set 00:34:36.610 [2024-07-11 02:40:26.815368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.610 [2024-07-11 02:40:26.815389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.815405] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.610 [2024-07-11 02:40:26.815420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.815436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.610 [2024-07-11 02:40:26.815451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.815467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.610 [2024-07-11 02:40:26.815481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.815496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe14320 is same with the state(5) to be set 00:34:36.610 [2024-07-11 02:40:26.815556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.610 [2024-07-11 02:40:26.815578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.815594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.610 [2024-07-11 02:40:26.815609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.815625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 
nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.610 [2024-07-11 02:40:26.815640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.815656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.610 [2024-07-11 02:40:26.815670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.815685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddffa0 is same with the state(5) to be set 00:34:36.610 [2024-07-11 02:40:26.815740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.610 [2024-07-11 02:40:26.815775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.815795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.610 [2024-07-11 02:40:26.815810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.815825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.610 [2024-07-11 02:40:26.815841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.815856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.610 [2024-07-11 02:40:26.815872] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.610 [2024-07-11 02:40:26.815886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x738610 is same with the state(5) to be set 00:34:36.610 [2024-07-11 02:40:26.815937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.611 [2024-07-11 02:40:26.815959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.815982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.611 [2024-07-11 02:40:26.815997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.816013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.611 [2024-07-11 02:40:26.816028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.816043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.611 [2024-07-11 02:40:26.816058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.816072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46e50 is same with the state(5) to be set 00:34:36.611 [2024-07-11 02:40:26.816120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:34:36.611 [2024-07-11 02:40:26.816149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.816168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.611 [2024-07-11 02:40:26.816183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.816199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.611 [2024-07-11 02:40:26.816214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.816229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.611 [2024-07-11 02:40:26.816244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.816263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x806a60 is same with the state(5) to be set 00:34:36.611 [2024-07-11 02:40:26.816312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.611 [2024-07-11 02:40:26.816333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.816350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.611 [2024-07-11 02:40:26.816372] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.816388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.611 [2024-07-11 02:40:26.816403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.816419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.611 [2024-07-11 02:40:26.816434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.816448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3f500 is same with the state(5) to be set 00:34:36.611 [2024-07-11 02:40:26.816494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.611 [2024-07-11 02:40:26.816524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.816543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.611 [2024-07-11 02:40:26.816558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.816573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.611 [2024-07-11 02:40:26.816588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.816604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.611 [2024-07-11 02:40:26.816618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.816633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde7290 is same with the state(5) to be set 00:34:36.611 [2024-07-11 02:40:26.816682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.611 [2024-07-11 02:40:26.816703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.816719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.611 [2024-07-11 02:40:26.816734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.816750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.611 [2024-07-11 02:40:26.816765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.816781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.611 [2024-07-11 02:40:26.816803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.816817] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58af0 is same with the state(5) to be set 00:34:36.611 [2024-07-11 02:40:26.816865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.611 [2024-07-11 02:40:26.816896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.816915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.611 [2024-07-11 02:40:26.816931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.816946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.611 [2024-07-11 02:40:26.816961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.816977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:36.611 [2024-07-11 02:40:26.816992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.817012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2d90 is same with the state(5) to be set 00:34:36.611 [2024-07-11 02:40:26.817470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.611 [2024-07-11 02:40:26.817496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.817530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.611 [2024-07-11 02:40:26.817549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.817567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.611 [2024-07-11 02:40:26.817583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.817601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.611 [2024-07-11 02:40:26.817618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.817636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.611 [2024-07-11 02:40:26.817651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.817669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.611 [2024-07-11 02:40:26.817684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.817702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.611 
[2024-07-11 02:40:26.817718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.817740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.611 [2024-07-11 02:40:26.817756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.817773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.611 [2024-07-11 02:40:26.817789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.817807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.611 [2024-07-11 02:40:26.817822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.817840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.611 [2024-07-11 02:40:26.817855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.817872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.611 [2024-07-11 02:40:26.817888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.817905] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.611 [2024-07-11 02:40:26.817920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.817938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.611 [2024-07-11 02:40:26.817953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.817971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.611 [2024-07-11 02:40:26.817986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.611 [2024-07-11 02:40:26.818004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.611 [2024-07-11 02:40:26.818026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.612 [2024-07-11 02:40:26.818044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.612 [2024-07-11 02:40:26.818059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.612 [2024-07-11 02:40:26.818077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.612 [2024-07-11 02:40:26.818092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.612 [2024-07-11 02:40:26.818109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.612 [2024-07-11 02:40:26.818125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.612 [2024-07-11 02:40:26.818142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.612 [2024-07-11 02:40:26.818162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.612 [2024-07-11 02:40:26.818180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.612 [2024-07-11 02:40:26.818195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.612 [2024-07-11 02:40:26.818212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.612 [2024-07-11 02:40:26.818228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.612 [2024-07-11 02:40:26.818245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.612 [2024-07-11 02:40:26.818260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.612 [2024-07-11 02:40:26.818278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.612 [2024-07-11 02:40:26.818293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.612 [2024-07-11 02:40:26.818310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.612 [2024-07-11 02:40:26.818326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.818343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.818358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.818376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.818391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.818409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.818425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.818443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.818458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 
[2024-07-11 02:40:26.818476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.818491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.818519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.818536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.818555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.818576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.818598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.818614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.818631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.818647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.818664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.818679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.818697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.818713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.818730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.818745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.818763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.818778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.818796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.818812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.818830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.818845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.818863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.818879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.818896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.818912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.818929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.818945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.818962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.818978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.818995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.819014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.819032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.819048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:36.613 [2024-07-11 02:40:26.819065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.819081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.819098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.819114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.819133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.819149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.819167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.819183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.819201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.819217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.819234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.819250] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.819268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.819283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.819301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.819316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.819334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.819359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.819377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.819393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.819410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.613 [2024-07-11 02:40:26.819426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.613 [2024-07-11 02:40:26.819448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.819463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.819481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.819496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.819521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.819538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.819556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.819571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.819589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.819604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.819622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.819637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.819654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.819670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.819750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:36.614 [2024-07-11 02:40:26.819829] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdaee10 was disconnected and freed. reset controller. 00:34:36.614 [2024-07-11 02:40:26.820117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.820144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.820174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.820191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.820209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.820225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.820243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.820258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.820276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.820298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.820317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.820333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.820351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.820366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.820384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.820405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.820423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.820439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.820456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 
nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.820471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.820489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.820504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.820541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.820559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.820576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.820592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.820610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.820626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.820644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.820660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:34:36.614 [2024-07-11 02:40:26.820678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.820693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.820711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.820727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.820748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.820764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.820782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.820797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.820815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.820836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.820857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 
02:40:26.820874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.820892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.820908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.820925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.820941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.820959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.820975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.820993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.821009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.821027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.821043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.821060] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.821076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.821094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.821110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.821128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.821143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.821162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.821181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.821200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.821218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.821237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.821253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.821271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.821286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.821305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.821320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.821338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.614 [2024-07-11 02:40:26.821360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.614 [2024-07-11 02:40:26.821378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.821394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.821412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.821434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.821452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.821467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.821485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.821501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.821528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.821544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.821563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.821585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.821603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.821619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.821641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.821657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 
[2024-07-11 02:40:26.821675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.821691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.821709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.821731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.821749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.821770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.821788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.821817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.821835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.821852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.821870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.821885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.821903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.821921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.821939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.821954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.821972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.821987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.822005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.822021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.822039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.822059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.822077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.822096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.822115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.822137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.822155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.822170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.822188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.822204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.822221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.822237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.822255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.822271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.822289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.822305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.822322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.822338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.822356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.822376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.822395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.822410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.822538] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdb1820 was disconnected and freed. reset controller. 
00:34:36.615 [2024-07-11 02:40:26.824280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:34:36.615 [2024-07-11 02:40:26.824352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80d930 (9): Bad file descriptor 00:34:36.615 [2024-07-11 02:40:26.827628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:34:36.615 [2024-07-11 02:40:26.827700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:34:36.615 [2024-07-11 02:40:26.827732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc58af0 (9): Bad file descriptor 00:34:36.615 [2024-07-11 02:40:26.827759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc46e50 (9): Bad file descriptor 00:34:36.615 [2024-07-11 02:40:26.827820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe14320 (9): Bad file descriptor 00:34:36.615 [2024-07-11 02:40:26.827864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xddffa0 (9): Bad file descriptor 00:34:36.615 [2024-07-11 02:40:26.827900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x738610 (9): Bad file descriptor 00:34:36.615 [2024-07-11 02:40:26.827939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x806a60 (9): Bad file descriptor 00:34:36.615 [2024-07-11 02:40:26.827970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3f500 (9): Bad file descriptor 00:34:36.615 [2024-07-11 02:40:26.828003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde7290 (9): Bad file descriptor 00:34:36.615 [2024-07-11 02:40:26.828038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0xdc2d90 (9): Bad file descriptor 00:34:36.615 [2024-07-11 02:40:26.829052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.615 [2024-07-11 02:40:26.829091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80d930 with addr=10.0.0.2, port=4420 00:34:36.615 [2024-07-11 02:40:26.829111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80d930 is same with the state(5) to be set 00:34:36.615 [2024-07-11 02:40:26.830012] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:34:36.615 [2024-07-11 02:40:26.830107] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:34:36.615 [2024-07-11 02:40:26.830183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.830208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.830242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.830260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.830279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.830295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.830313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.830329] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.830347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.830363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.615 [2024-07-11 02:40:26.830382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.615 [2024-07-11 02:40:26.830398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.830416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.830432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.830450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.830477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.830496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.830521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.830541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.830557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.830575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.830590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.830608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.830624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.830641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.830657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.830675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.830690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.830708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.830724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:34:36.616 [2024-07-11 02:40:26.830742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.830758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.830775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.830791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.830809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.830824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.830842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.830857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.830875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.830890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.830912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.830928] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.830946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.830962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.830980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.830995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.831014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.831030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.831048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.831063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.831081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.831096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.831114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.831129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.831147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.831163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.831180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.831196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.831213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.831229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.831246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.831261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.831279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.831294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.831312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.831336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.831355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.831370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.831388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.831403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.831421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.831436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.831454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.831470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.831489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 
02:40:26.831504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.831562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.831579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.831597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.831613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.831631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.831647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.831664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.831680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.831697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.831713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.831731] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.831747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.831764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.831780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.831802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.831818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.831836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.831851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.831869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.831884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.616 [2024-07-11 02:40:26.831902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.616 [2024-07-11 02:40:26.831917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.831935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.831950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.831969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.831985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.832003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.832019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.832037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.832052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.832070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.832086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.832103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 
[2024-07-11 02:40:26.832119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.832136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.832152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.832169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.832185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.832203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.832222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.832241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.832256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.832274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.832290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.832307] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.832323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.832341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.832356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.832381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.832397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.832415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.832430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.832447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd69480 is same with the state(5) to be set 00:34:36.617 [2024-07-11 02:40:26.832536] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd69480 was disconnected and freed. reset controller. 
00:34:36.617 [2024-07-11 02:40:26.832849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.617 [2024-07-11 02:40:26.832883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc46e50 with addr=10.0.0.2, port=4420 00:34:36.617 [2024-07-11 02:40:26.832903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46e50 is same with the state(5) to be set 00:34:36.617 [2024-07-11 02:40:26.833034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.617 [2024-07-11 02:40:26.833059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc58af0 with addr=10.0.0.2, port=4420 00:34:36.617 [2024-07-11 02:40:26.833076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58af0 is same with the state(5) to be set 00:34:36.617 [2024-07-11 02:40:26.833102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80d930 (9): Bad file descriptor 00:34:36.617 [2024-07-11 02:40:26.833188] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:34:36.617 [2024-07-11 02:40:26.833284] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:34:36.617 [2024-07-11 02:40:26.834812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:34:36.617 [2024-07-11 02:40:26.834887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc46e50 (9): Bad file descriptor 00:34:36.617 [2024-07-11 02:40:26.834913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc58af0 (9): Bad file descriptor 00:34:36.617 [2024-07-11 02:40:26.834933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:34:36.617 [2024-07-11 02:40:26.834960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: 
*ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:34:36.617 [2024-07-11 02:40:26.834977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:34:36.617 [2024-07-11 02:40:26.835194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.835220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.835251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.835268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.835286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.835302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.835320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.835336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.835354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.835369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 
02:40:26.835388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.835403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.835422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.835437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.835455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.835471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.835488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.835504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.835533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.835550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.835567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.835583] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.835601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.835621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.835641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.835657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.835675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.617 [2024-07-11 02:40:26.835690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.617 [2024-07-11 02:40:26.835708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.835724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.835742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.835758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.835776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.835791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.835809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.835825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.835843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.835858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.835876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.835892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.835910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.835925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.835943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.835958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:36.618 [2024-07-11 02:40:26.835976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.835992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836164] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 
02:40:26.836775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.836966] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.836982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.837000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.837015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.837033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.837048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.837066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.618 [2024-07-11 02:40:26.837082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.618 [2024-07-11 02:40:26.837100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.837115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.837133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.837149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.837167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.837182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.837200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.837215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.837233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.837249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.837267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.837282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.837300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.837315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.837333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 
[2024-07-11 02:40:26.837349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.837371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.837386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.837404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.837421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.837438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6a980 is same with the state(5) to be set 00:34:36.619 [2024-07-11 02:40:26.837908] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd6a980 was disconnected and freed. reset controller. 00:34:36.619 [2024-07-11 02:40:26.837959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:36.619 [2024-07-11 02:40:26.838182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.619 [2024-07-11 02:40:26.838215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde7290 with addr=10.0.0.2, port=4420 00:34:36.619 [2024-07-11 02:40:26.838234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde7290 is same with the state(5) to be set 00:34:36.619 [2024-07-11 02:40:26.838253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:34:36.619 [2024-07-11 02:40:26.838268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:34:36.619 [2024-07-11 02:40:26.838286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:34:36.619 [2024-07-11 02:40:26.838311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:34:36.619 [2024-07-11 02:40:26.838327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:34:36.619 [2024-07-11 02:40:26.838342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:34:36.619 [2024-07-11 02:40:26.838391] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:34:36.619 [2024-07-11 02:40:26.838544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.838569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.838618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.838637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.838661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.838679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.838702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.838720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.838743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.838761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.838792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.838810] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.838834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.838852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.838875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.838892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.838917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.838935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.838958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.838976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.839000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.839017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.839041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.839059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.839082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.839100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.839123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.839141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.839165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.839182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.839205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.839223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.839246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.839264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:36.619 [2024-07-11 02:40:26.839287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.839309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.839332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.839350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.839372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.839389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.839412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.839429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.839451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.839469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.839491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.839518] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.839544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.839562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.839585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.619 [2024-07-11 02:40:26.839602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.619 [2024-07-11 02:40:26.839626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.839644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.839667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.839685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.839707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.839725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.839748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.839765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.839788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.839806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.839833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.839851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.839874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.839892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.839920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.839939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.839963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.839981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.840004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.840022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.840045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.840063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.840086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.840104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.840126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.840144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.840166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.840186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.840209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 
02:40:26.840227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.840252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.840269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.840293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.840310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.840333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.840356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.840380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.840397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.840420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.840437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.840460] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.840477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.840500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.840531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.840560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.840578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.840605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.840623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.840645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.840663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.840685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.840703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.840726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.840743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.840765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.840782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.840805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.840822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.840846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.840863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.840890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.840908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.840931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 
[2024-07-11 02:40:26.840948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.840971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.840988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.841011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.841029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.841052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.841069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.841097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.841115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.841137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.841155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.841177] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.841194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.841217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.841235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.842170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb0380 is same with the state(5) to be set 00:34:36.620 [2024-07-11 02:40:26.842265] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdb0380 was disconnected and freed. reset controller. 00:34:36.620 [2024-07-11 02:40:26.842287] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:36.620 [2024-07-11 02:40:26.843957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:36.620 [2024-07-11 02:40:26.843994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:36.620 [2024-07-11 02:40:26.844068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde7290 (9): Bad file descriptor 00:34:36.620 [2024-07-11 02:40:26.844219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.620 [2024-07-11 02:40:26.844244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.620 [2024-07-11 02:40:26.844285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.844303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.844321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.844337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.844357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.844373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.844390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.844406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.844424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.844440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.844457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.844473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.844491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.844506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.844535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.844551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.844569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.844584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.844602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.844619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.844637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.844653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.844671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.844686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.844704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.844724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.844742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.844757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.844775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.844790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.844809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.844824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.844841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.844857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.844875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.844891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.844909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.844925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.844943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.844958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.844976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.844992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.845010] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.845026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.845043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.845060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.845077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.845093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.845111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.845126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.845148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.845164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.845181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.845197] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.845215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.845230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.845248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.845264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.845282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.845297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.845314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.845330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.845348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.845363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.845381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.621 [2024-07-11 02:40:26.845396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.621 [2024-07-11 02:40:26.845413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.845429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.845447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.845463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.845480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.845496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.845520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.845538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.845555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.845575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 
02:40:26.845593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.845609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.845626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.845642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.845660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.845676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.845694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.845709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.845727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.845742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.845759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.845775] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.845792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.845808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.845825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.845841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.845858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.845874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.845891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.845907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.845924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.845939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.845956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 
nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.845972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.845994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.846010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.846027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.846043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.846061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.846077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.846094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.846110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.846128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.846151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:36.622 [2024-07-11 02:40:26.846170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.846185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.846203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.846218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.846236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.846252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.846270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.846285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.846303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.846319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.846337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.846352] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.846370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.846385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.846403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.846422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.846439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x809b50 is same with the state(5) to be set 00:34:36.622 [2024-07-11 02:40:26.847909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.847941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.847969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.847985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.848003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.848020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.848038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.848054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.848071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.848087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.848104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.848119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.848137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.848152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.848170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.848186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.848204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:36.622 [2024-07-11 02:40:26.848219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.848239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.848254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.848272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.622 [2024-07-11 02:40:26.848288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.622 [2024-07-11 02:40:26.848305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.848328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.848345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.848361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.848378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.848393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.848411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.848427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.848444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.848459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.848477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.848492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.848519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.848536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.848554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.848571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.848589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.848604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.848621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.848637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.848655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.848670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.848687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.848703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.848720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.848736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.848758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.848774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.848792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.848807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.848825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.848840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.848858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.848873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.848890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.848906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.848923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.848939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.848956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.848972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.848989] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.849005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.849022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.849038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.849055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.849071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.849088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.849104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.849121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.849137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.849154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.849173] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.849191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.849207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.849224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.849239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.849257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.849272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.849290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.849305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.849322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.849338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.849355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.849370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.849388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.849403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.849420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.849436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.849454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.849469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.849487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.849502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.849528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.849544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 
02:40:26.849562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.849578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.849595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.849621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.623 [2024-07-11 02:40:26.849639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.623 [2024-07-11 02:40:26.849655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.624 [2024-07-11 02:40:26.849673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.624 [2024-07-11 02:40:26.849688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.624 [2024-07-11 02:40:26.849706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.624 [2024-07-11 02:40:26.849723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.624 [2024-07-11 02:40:26.849741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.624 [2024-07-11 02:40:26.849757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.624 [2024-07-11 02:40:26.849775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.624 [2024-07-11 02:40:26.849791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.624 [2024-07-11 02:40:26.849809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.624 [2024-07-11 02:40:26.849825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.624 [2024-07-11 02:40:26.849842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.624 [2024-07-11 02:40:26.849858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.624 [2024-07-11 02:40:26.849875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.624 [2024-07-11 02:40:26.849891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.624 [2024-07-11 02:40:26.849909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.624 [2024-07-11 02:40:26.849925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.624 [2024-07-11 02:40:26.849942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 
nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.624 [2024-07-11 02:40:26.849958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.624 [2024-07-11 02:40:26.849975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.624 [2024-07-11 02:40:26.849991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.624 [2024-07-11 02:40:26.850008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.624 [2024-07-11 02:40:26.850024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.624 [2024-07-11 02:40:26.850045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.624 [2024-07-11 02:40:26.850062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.624 [2024-07-11 02:40:26.850080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.624 [2024-07-11 02:40:26.850096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.624 [2024-07-11 02:40:26.850113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ae30 is same with the state(5) to be set 00:34:36.624 [2024-07-11 02:40:26.851609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:36.624 [2024-07-11 02:40:26.851645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.624 [2024-07-11 02:40:26.851673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.624 [2024-07-11 02:40:26.851690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.624 [2024-07-11 02:40:26.851710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.624 [2024-07-11 02:40:26.851727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.624 [2024-07-11 02:40:26.851746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.624 [2024-07-11 02:40:26.851762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.624 [2024-07-11 02:40:26.851780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.624 [2024-07-11 02:40:26.851797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.624 [2024-07-11 02:40:26.851814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.624 [2024-07-11 02:40:26.851831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.624 [2024-07-11 02:40:26.851849] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.624 [2024-07-11 02:40:26.851865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.624 [2024-07-11 02:40:26.851882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.624 [2024-07-11 02:40:26.851898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.624 [2024-07-11 02:40:26.851917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.624 [2024-07-11 02:40:26.851933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.624 [2024-07-11 02:40:26.851950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.624 [2024-07-11 02:40:26.851966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.624 [2024-07-11 02:40:26.851992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.624 [2024-07-11 02:40:26.852008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:36.624 [2024-07-11 02:40:26.852026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:36.624 [2024-07-11 02:40:26.852042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:36.624 [2024-07-11 02:40:26.852061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:36.624 [2024-07-11 02:40:26.852076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION notice pairs repeat for cid:13 through cid:63, lba:18048 through lba:24448 in steps of 128 ...]
00:34:36.625 [2024-07-11 02:40:26.853852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb41e0 is same with the state(5) to be set
00:34:36.625 [2024-07-11 02:40:26.855327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:36.625 [2024-07-11 02:40:26.855361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION notice pairs repeat for cid:1 through cid:63, lba:16512 through lba:24448 in steps of 128 ...]
00:34:36.627 [2024-07-11 02:40:26.857584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc16700 is same with the state(5) to be set
00:34:36.627 [2024-07-11 02:40:26.859598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:34:36.627 [2024-07-11 02:40:26.859648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:34:36.627 [2024-07-11 02:40:26.859683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:36.627 [2024-07-11 02:40:26.859746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*:
[nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:34:36.627 [2024-07-11 02:40:26.859765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:34:36.627 [2024-07-11 02:40:26.859783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:34:36.627 [2024-07-11 02:40:26.859816] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:36.627 [2024-07-11 02:40:26.859871] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:36.627 [2024-07-11 02:40:26.859901] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:36.627 [2024-07-11 02:40:26.859925] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:36.627 [2024-07-11 02:40:26.859946] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:36.627 [2024-07-11 02:40:26.859968] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:36.627 [2024-07-11 02:40:26.859990] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:34:36.627 [2024-07-11 02:40:26.860669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:34:36.627 [2024-07-11 02:40:26.860696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:34:36.627 [2024-07-11 02:40:26.860726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:34:36.627 task offset: 21760 on job bdev=Nvme6n1 fails
00:34:36.627
00:34:36.627 Latency(us)
00:34:36.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:36.627 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:36.627 Job: Nvme1n1 ended in about 1.07 seconds with error
00:34:36.627 Verification LBA range: start 0x0 length 0x400
00:34:36.627 Nvme1n1 : 1.07 123.23 7.70 59.75 0.00 345763.58 23398.78 288940.94
00:34:36.627 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:36.627 Job: Nvme2n1 ended in about 1.07 seconds with error
00:34:36.627 Verification LBA range: start 0x0 length 0x400
00:34:36.627 Nvme2n1 : 1.07 119.09 7.44 59.54 0.00 346702.13 37282.70 304475.40
00:34:36.627 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:36.627 Job: Nvme3n1 ended in about 1.05 seconds with error
00:34:36.627 Verification LBA range: start 0x0 length 0x400
00:34:36.627 Nvme3n1 : 1.05 182.97 11.44 60.99 0.00 247984.64 7524.50 298261.62
00:34:36.627 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:36.627 Job: Nvme4n1 ended in about 1.07 seconds with error
00:34:36.627 Verification LBA range: start 0x0 length 0x400
00:34:36.627 Nvme4n1 : 1.07 184.87 11.55 60.06 0.00 241410.18 21165.70 299815.06
00:34:36.627 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:36.627 Job: Nvme5n1 ended in about 1.05 seconds with error
00:34:36.627 Verification LBA range: start 0x0 length 0x400
00:34:36.627 Nvme5n1 : 1.05 182.72 11.42 60.91 0.00 237173.57 14369.37 299815.06
00:34:36.627 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:36.627 Job: Nvme6n1 ended in about 1.05 seconds with error
00:34:36.627 Verification LBA range: start 0x0 length 0x400
00:34:36.627 Nvme6n1 : 1.05 122.21 7.64 61.11 0.00 307723.76 10582.85 357292.56
00:34:36.627 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:36.627 Job: Nvme7n1 ended in about 1.08 seconds with error
00:34:36.627 Verification LBA range: start 0x0 length 0x400
00:34:36.628 Nvme7n1 : 1.08 118.68 7.42 59.34 0.00 310674.01 18350.08 304475.40
00:34:36.628 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:36.628 Job: Nvme8n1 ended in about 1.08 seconds with error
00:34:36.628 Verification LBA range: start 0x0 length 0x400
00:34:36.628 Nvme8n1 : 1.08 118.27 7.39 59.13 0.00 304466.55 16699.54 313796.08
00:34:36.628 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:36.628 Job: Nvme9n1 ended in about 1.06 seconds with error
00:34:36.628 Verification LBA range: start 0x0 length 0x400
00:34:36.628 Nvme9n1 : 1.06 120.96 7.56 60.48 0.00 289171.78 22816.24 307582.29
00:34:36.628 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:34:36.628 Job: Nvme10n1 ended in about 1.07 seconds with error
00:34:36.628 Verification LBA range: start 0x0 length 0x400
00:34:36.628 Nvme10n1 : 1.07 119.92 7.49 59.96 0.00 284936.98 26991.12 313796.08
00:34:36.628 ===================================================================================================================
00:34:36.628 Total : 1392.91 87.06 601.26 0.00 287111.67 7524.50 357292.56
00:34:36.628 [2024-07-11 02:40:26.887800] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:34:36.628 [2024-07-11 02:40:26.887886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:34:36.628 [2024-07-11 02:40:26.887923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:34:36.628 [2024-07-11 02:40:26.888168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.628 [2024-07-11 02:40:26.888204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xddffa0 with addr=10.0.0.2, port=4420
00:34:36.628 [2024-07-11 02:40:26.888237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddffa0 is same with the state(5) to be set
00:34:36.628 [2024-07-11 02:40:26.888359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.628 [2024-07-11 02:40:26.888386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80d930 with addr=10.0.0.2, port=4420
00:34:36.628 [2024-07-11 02:40:26.888403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80d930 is same with the state(5) to be set
00:34:36.628 [2024-07-11 02:40:26.888537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.628 [2024-07-11 02:40:26.888564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x806a60 with addr=10.0.0.2, port=4420
00:34:36.628 [2024-07-11 02:40:26.888580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x806a60 is same with the state(5) to be set
00:34:36.628 [2024-07-11 02:40:26.889829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:34:36.628 [2024-07-11 02:40:26.889892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:36.628 [2024-07-11 02:40:26.890099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.628 [2024-07-11 02:40:26.890134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc3f500 with addr=10.0.0.2, port=4420
00:34:36.628 [2024-07-11 02:40:26.890154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3f500 is same with the state(5) to be set
00:34:36.628 [2024-07-11 02:40:26.890274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.628 [2024-07-11 02:40:26.890299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc2d90 with addr=10.0.0.2, port=4420
00:34:36.628 [2024-07-11 02:40:26.890317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2d90 is same with the state(5) to be set
00:34:36.628 [2024-07-11 02:40:26.890406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.628 [2024-07-11 02:40:26.890431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x738610 with addr=10.0.0.2, port=4420
00:34:36.628 [2024-07-11 02:40:26.890448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x738610 is same with the state(5) to be set
00:34:36.628 [2024-07-11 02:40:26.890559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.628 [2024-07-11 02:40:26.890585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe14320 with addr=10.0.0.2, port=4420
00:34:36.628 [2024-07-11 02:40:26.890601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe14320 is same with the state(5) to be set
00:34:36.628 [2024-07-11 02:40:26.890704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.628 [2024-07-11 02:40:26.890729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc58af0 with addr=10.0.0.2, port=4420
00:34:36.628 [2024-07-11 02:40:26.890745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc58af0 is same with the state(5) to be set
00:34:36.628 [2024-07-11 02:40:26.890773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xddffa0 (9): Bad file descriptor
00:34:36.628 [2024-07-11 02:40:26.890796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80d930 (9): Bad file descriptor
00:34:36.628 [2024-07-11 02:40:26.890816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x806a60 (9): Bad file descriptor
00:34:36.628 [2024-07-11 02:40:26.890885] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:34:36.628 [2024-07-11 02:40:26.890914] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:34:36.628 [2024-07-11 02:40:26.890940] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:34:36.628 [2024-07-11 02:40:26.891161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.628 [2024-07-11 02:40:26.891190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc46e50 with addr=10.0.0.2, port=4420
00:34:36.628 [2024-07-11 02:40:26.891207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc46e50 is same with the state(5) to be set
00:34:36.628 [2024-07-11 02:40:26.891228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3f500 (9): Bad file descriptor
00:34:36.628 [2024-07-11 02:40:26.891248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc2d90 (9): Bad file descriptor
00:34:36.628 [2024-07-11 02:40:26.891267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x738610 (9): Bad file descriptor
00:34:36.628 [2024-07-11 02:40:26.891287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe14320 (9): Bad file descriptor
00:34:36.628 [2024-07-11 02:40:26.891306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc58af0 (9): Bad file descriptor
00:34:36.628 [2024-07-11 02:40:26.891324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:34:36.628 [2024-07-11 02:40:26.891338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:34:36.628 [2024-07-11 02:40:26.891354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:34:36.628 [2024-07-11 02:40:26.891383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:34:36.628 [2024-07-11 02:40:26.891399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:34:36.628 [2024-07-11 02:40:26.891414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:34:36.628 [2024-07-11 02:40:26.891432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:36.628 [2024-07-11 02:40:26.891447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:36.628 [2024-07-11 02:40:26.891462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:36.628 [2024-07-11 02:40:26.892092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:34:36.628 [2024-07-11 02:40:26.892131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:36.628 [2024-07-11 02:40:26.892148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:36.628 [2024-07-11 02:40:26.892161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:36.628 [2024-07-11 02:40:26.892195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc46e50 (9): Bad file descriptor
00:34:36.628 [2024-07-11 02:40:26.892221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:34:36.628 [2024-07-11 02:40:26.892236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:34:36.628 [2024-07-11 02:40:26.892250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:34:36.628 [2024-07-11 02:40:26.892269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:34:36.628 [2024-07-11 02:40:26.892285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:34:36.628 [2024-07-11 02:40:26.892300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:34:36.628 [2024-07-11 02:40:26.892318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:34:36.628 [2024-07-11 02:40:26.892338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:34:36.628 [2024-07-11 02:40:26.892352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:34:36.628 [2024-07-11 02:40:26.892371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:34:36.628 [2024-07-11 02:40:26.892386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:34:36.628 [2024-07-11 02:40:26.892400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:34:36.628 [2024-07-11 02:40:26.892419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:34:36.628 [2024-07-11 02:40:26.892434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:34:36.628 [2024-07-11 02:40:26.892448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:34:36.628 [2024-07-11 02:40:26.892494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:36.628 [2024-07-11 02:40:26.892524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:36.628 [2024-07-11 02:40:26.892540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:36.628 [2024-07-11 02:40:26.892553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:36.628 [2024-07-11 02:40:26.892566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:36.628 [2024-07-11 02:40:26.892699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.628 [2024-07-11 02:40:26.892728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde7290 with addr=10.0.0.2, port=4420
00:34:36.628 [2024-07-11 02:40:26.892745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde7290 is same with the state(5) to be set
00:34:36.628 [2024-07-11 02:40:26.892762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:34:36.628 [2024-07-11 02:40:26.892776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:34:36.628 [2024-07-11 02:40:26.892790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:34:36.628 [2024-07-11 02:40:26.892834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:36.628 [2024-07-11 02:40:26.892859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde7290 (9): Bad file descriptor 00:34:36.628 [2024-07-11 02:40:26.892903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:34:36.628 [2024-07-11 02:40:26.892921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:34:36.628 [2024-07-11 02:40:26.892937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:34:36.628 [2024-07-11 02:40:26.892982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:36.889 02:40:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:34:36.889 02:40:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:34:37.826 02:40:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1924074 00:34:37.826 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1924074) - No such process 00:34:37.826 02:40:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:34:37.826 02:40:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:34:37.826 02:40:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:34:37.826 02:40:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:34:37.827 02:40:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:37.827 02:40:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:34:37.827 02:40:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:37.827 02:40:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:34:37.827 02:40:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:37.827 02:40:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:34:37.827 02:40:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:37.827 02:40:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:37.827 rmmod nvme_tcp 00:34:37.827 rmmod nvme_fabrics 00:34:37.827 rmmod nvme_keyring 00:34:38.085 02:40:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:38.085 02:40:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:34:38.085 02:40:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:34:38.085 02:40:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:34:38.085 02:40:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:38.085 02:40:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:38.085 02:40:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:38.085 02:40:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:38.085 02:40:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:38.085 02:40:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:34:38.085 02:40:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:38.085 02:40:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:39.987 02:40:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:39.987 00:34:39.987 real 0m7.327s 00:34:39.987 user 0m17.960s 00:34:39.987 sys 0m1.365s 00:34:39.987 02:40:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:39.987 02:40:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:39.987 ************************************ 00:34:39.987 END TEST nvmf_shutdown_tc3 00:34:39.987 ************************************ 00:34:39.987 02:40:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:34:39.987 02:40:30 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:34:39.987 00:34:39.988 real 0m26.293s 00:34:39.988 user 1m14.077s 00:34:39.988 sys 0m5.818s 00:34:39.988 02:40:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:39.988 02:40:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:39.988 ************************************ 00:34:39.988 END TEST nvmf_shutdown 00:34:39.988 ************************************ 00:34:39.988 02:40:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:39.988 02:40:30 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:34:39.988 02:40:30 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:39.988 02:40:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.988 02:40:30 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:34:39.988 02:40:30 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:39.988 02:40:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:34:39.988 02:40:30 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:34:39.988 02:40:30 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:34:39.988 02:40:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:39.988 02:40:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:39.988 02:40:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.988 ************************************ 00:34:39.988 START TEST nvmf_multicontroller 00:34:39.988 ************************************ 00:34:39.988 02:40:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:34:40.247 * Looking for test storage... 00:34:40.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
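The exported PATH above shows the same `/opt/golangci`, `/opt/protoc`, and `/opt/go` prefixes stacked many times over: each re-sourcing of `paths/export.sh` prepends them again without checking for prior occurrences. A small first-occurrence dedup pass (a sketch, not part of the harness) would keep the search order while dropping the repeats:

```shell
# Keep the first occurrence of each PATH component, drop later repeats.
# Pure-bash sketch; assumes components contain no glob characters.
dedup_path() {
    local out= seen= dir
    local IFS=:
    for dir in $1; do                      # split on ':' via IFS
        case ":$seen:" in
            *":$dir:"*) ;;                 # already kept earlier, skip
            *) seen=$seen:$dir
               out=${out:+$out:}$dir ;;
        esac
    done
    printf '%s\n' "$out"
}
```

Applied as `PATH=$(dedup_path "$PATH")`, this would collapse the repeated `/opt/...` runs visible in the log to a single copy each.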
00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:34:40.247 02:40:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:41.629 02:40:32 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:34:41.629 Found 0000:08:00.0 (0x8086 - 0x159b) 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:34:41.629 Found 0000:08:00.1 (0x8086 - 0x159b) 00:34:41.629 02:40:32 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:41.629 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:34:41.887 Found net devices under 0000:08:00.0: cvl_0_0 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:34:41.887 Found net devices under 0000:08:00.1: cvl_0_1 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:41.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:41.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:34:41.887 00:34:41.887 --- 10.0.0.2 ping statistics --- 00:34:41.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:41.887 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:34:41.887 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:41.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:41.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:34:41.887 00:34:41.887 --- 10.0.0.1 ping statistics --- 00:34:41.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:41.887 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:34:41.888 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:41.888 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:34:41.888 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:41.888 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:41.888 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:41.888 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:41.888 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:41.888 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:41.888 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:41.888 02:40:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:34:41.888 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:41.888 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:41.888 02:40:32 
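The `nvmf_tcp_init` steps above can be read as one topology recipe: one port of the E810 NIC (`cvl_0_0`) is moved into a fresh network namespace to play the NVMe-oF target, while its sibling port (`cvl_0_1`) stays in the host namespace as the initiator, and both directions are verified with a ping. A dry-run sketch of that sequence (the real commands need root and this machine's physical interfaces, so `run` only echoes here):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target/initiator namespace split performed in the
# log above. Interface names cvl_0_0 / cvl_0_1 are the ones the harness
# discovered; run() echoes instead of executing.
set -euo pipefail

run() { echo "+ $*"; }     # swap for: run() { "$@"; }  to apply for real

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"            # target port into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side (host netns)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                         # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator
```

Because the target lives in its own namespace, `nvmf_tgt` is later launched under `ip netns exec cvl_0_0_ns_spdk`, which is exactly the `NVMF_TARGET_NS_CMD` prefix visible in the log.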
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:41.888 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1925941 00:34:41.888 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:41.888 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1925941 00:34:41.888 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1925941 ']' 00:34:41.888 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:41.888 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:41.888 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:41.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:41.888 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:41.888 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:41.888 [2024-07-11 02:40:32.246919] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:34:41.888 [2024-07-11 02:40:32.247021] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:41.888 EAL: No free 2048 kB hugepages reported on node 1 00:34:42.146 [2024-07-11 02:40:32.312066] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:42.146 [2024-07-11 02:40:32.399756] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:34:42.146 [2024-07-11 02:40:32.399822] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:42.146 [2024-07-11 02:40:32.399838] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:42.146 [2024-07-11 02:40:32.399852] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:42.146 [2024-07-11 02:40:32.399864] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:42.146 [2024-07-11 02:40:32.399961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:42.146 [2024-07-11 02:40:32.400040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:42.146 [2024-07-11 02:40:32.400044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:42.146 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:42.146 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:34:42.146 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:42.146 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:42.146 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:42.146 02:40:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:42.146 02:40:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:42.146 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.146 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:42.146 [2024-07-11 02:40:32.537075] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:42.146 
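`waitforlisten 1925941` above blocks until the freshly started `nvmf_tgt` is alive and serving its JSON-RPC socket at `/var/tmp/spdk.sock`. A simplified re-implementation of that idea (an assumption about the helper's shape; the real one in `common/autotest_common.sh` retries an actual RPC rather than just checking that the socket file exists):

```shell
# Poll until $pid is alive AND its Unix-domain RPC socket has appeared,
# or give up after $tries * 0.1s. Sketch only; the real waitforlisten
# verifies the socket answers RPCs, not merely that it exists.
waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} tries=${3:-100} i
    for ((i = 0; i < tries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died early
        [ -S "$sock" ] && return 0               # socket is up
        sleep 0.1
    done
    return 1                                     # timed out
}
```

In the log the socket comes up quickly, so the helper returns 0 and the `timing_exit start_nvmf_tgt` lines follow immediately.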
02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.146 02:40:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:42.146 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.146 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:42.404 Malloc0 00:34:42.404 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.404 02:40:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:42.404 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.404 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:42.404 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.404 02:40:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:42.404 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.404 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:42.404 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.404 02:40:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:42.404 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.404 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:42.404 [2024-07-11 02:40:32.604089] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:34:42.404 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.404 02:40:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:42.404 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.404 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:42.404 [2024-07-11 02:40:32.611998] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:42.404 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.404 02:40:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:34:42.404 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:42.405 Malloc1 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set 
+x 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1926051 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1926051 /var/tmp/bdevperf.sock 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1926051 ']' 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 
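The `rpc_cmd` calls between `multicontroller.sh@27` and `@41` provision the whole target in six steps: create the TCP transport, create two malloc bdevs, wrap each in a subsystem, and expose both subsystems on ports 4420 and 4421. `rpc_cmd` is the harness wrapper around SPDK's `scripts/rpc.py`; the equivalent sequence, echoed rather than executed (real invocations need a running `nvmf_tgt`):

```shell
# Echo-only replay of the provisioning RPCs from the log; rpc() stands in
# for the harness's rpc_cmd / scripts/rpc.py.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
rpc bdev_malloc_create 64 512 -b Malloc1
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
```

Two listeners per subsystem on the same address is deliberate: the multicontroller test needs two network paths so it can exercise multipath attach behavior from the bdevperf side.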
00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:42.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:42.405 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:42.662 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:42.662 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:34:42.662 02:40:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:34:42.662 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.662 02:40:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:42.662 NVMe0n1 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.662 1 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:42.662 request: 00:34:42.662 { 00:34:42.662 "name": "NVMe0", 00:34:42.662 "trtype": "tcp", 00:34:42.662 "traddr": "10.0.0.2", 00:34:42.662 "adrfam": "ipv4", 00:34:42.662 "trsvcid": "4420", 00:34:42.662 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:42.662 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:34:42.662 "hostaddr": "10.0.0.2", 00:34:42.662 "hostsvcid": "60000", 00:34:42.662 "prchk_reftag": false, 00:34:42.662 "prchk_guard": false, 
00:34:42.662 "hdgst": false, 00:34:42.662 "ddgst": false, 00:34:42.662 "method": "bdev_nvme_attach_controller", 00:34:42.662 "req_id": 1 00:34:42.662 } 00:34:42.662 Got JSON-RPC error response 00:34:42.662 response: 00:34:42.662 { 00:34:42.662 "code": -114, 00:34:42.662 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:34:42.662 } 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.662 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:42.919 request: 00:34:42.919 { 00:34:42.919 "name": "NVMe0", 00:34:42.919 "trtype": "tcp", 00:34:42.919 "traddr": "10.0.0.2", 00:34:42.919 "adrfam": "ipv4", 00:34:42.919 "trsvcid": "4420", 00:34:42.919 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:42.919 "hostaddr": "10.0.0.2", 00:34:42.919 "hostsvcid": "60000", 00:34:42.919 "prchk_reftag": false, 00:34:42.919 "prchk_guard": false, 00:34:42.919 "hdgst": false, 00:34:42.919 "ddgst": false, 00:34:42.919 "method": "bdev_nvme_attach_controller", 00:34:42.919 "req_id": 1 00:34:42.919 } 00:34:42.919 Got JSON-RPC error response 00:34:42.919 response: 00:34:42.919 { 00:34:42.919 "code": -114, 00:34:42.919 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:34:42.919 } 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@648 -- # local es=0 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:42.919 request: 00:34:42.919 { 00:34:42.919 "name": "NVMe0", 00:34:42.919 "trtype": "tcp", 00:34:42.919 "traddr": "10.0.0.2", 00:34:42.919 "adrfam": "ipv4", 00:34:42.919 "trsvcid": "4420", 00:34:42.919 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:42.919 "hostaddr": "10.0.0.2", 00:34:42.919 "hostsvcid": "60000", 00:34:42.919 "prchk_reftag": false, 00:34:42.919 "prchk_guard": false, 00:34:42.919 "hdgst": false, 00:34:42.919 "ddgst": false, 00:34:42.919 "multipath": "disable", 00:34:42.919 "method": "bdev_nvme_attach_controller", 00:34:42.919 "req_id": 1 00:34:42.919 } 00:34:42.919 Got JSON-RPC error response 00:34:42.919 response: 00:34:42.919 { 00:34:42.919 "code": -114, 00:34:42.919 "message": "A controller named NVMe0 already exists and multipath is 
disabled\n" 00:34:42.919 } 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:34:42.919 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.919 
02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:42.919 request: 00:34:42.919 { 00:34:42.919 "name": "NVMe0", 00:34:42.919 "trtype": "tcp", 00:34:42.919 "traddr": "10.0.0.2", 00:34:42.919 "adrfam": "ipv4", 00:34:42.919 "trsvcid": "4420", 00:34:42.919 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:42.919 "hostaddr": "10.0.0.2", 00:34:42.919 "hostsvcid": "60000", 00:34:42.920 "prchk_reftag": false, 00:34:42.920 "prchk_guard": false, 00:34:42.920 "hdgst": false, 00:34:42.920 "ddgst": false, 00:34:42.920 "multipath": "failover", 00:34:42.920 "method": "bdev_nvme_attach_controller", 00:34:42.920 "req_id": 1 00:34:42.920 } 00:34:42.920 Got JSON-RPC error response 00:34:42.920 response: 00:34:42.920 { 00:34:42.920 "code": -114, 00:34:42.920 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:34:42.920 } 00:34:42.920 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:42.920 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:34:42.920 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:42.920 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:42.920 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:42.920 02:40:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:42.920 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.920 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:42.920 00:34:42.920 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.920 02:40:33 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:42.920 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.920 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:42.920 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.920 02:40:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:34:42.920 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.920 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:42.920 00:34:42.920 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.920 02:40:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:42.920 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.920 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:42.920 02:40:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:34:42.920 02:40:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.920 02:40:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:34:42.920 02:40:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:44.291 0 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # 
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1926051 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1926051 ']' 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1926051 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1926051 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1926051' 00:34:44.291 killing process with pid 1926051 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1926051 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1926051 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:44.291 
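The repeated `-114` responses above all have the same shape: a `bdev_nvme_attach_controller` request that names an existing controller is rejected before any connection is made. As an illustrative sketch only (not part of the test log, and it does not talk to an SPDK target), the request and error fields recorded above can be modeled like this; the helper names are hypothetical:

```python
# Hypothetical sketch of the JSON-RPC exchange logged above. Field names
# are copied from the logged request; nothing here contacts bdevperf.sock.
import json

def build_attach_request(req_id, name, traddr, subnqn, trsvcid="4420"):
    """Build a request shaped like the ones sent to /var/tmp/bdevperf.sock."""
    return {
        "method": "bdev_nvme_attach_controller",
        "req_id": req_id,
        "name": name,
        "trtype": "tcp",
        "traddr": traddr,
        "adrfam": "ipv4",
        "trsvcid": trsvcid,
        "subnqn": subnqn,
    }

def is_duplicate_controller_error(response):
    """In the log, code -114 marks 'controller already exists' failures."""
    return response.get("code") == -114

req = build_attach_request(1, "NVMe0", "10.0.0.2", "nqn.2016-06.io.spdk:cnode2")
err = {"code": -114,
       "message": "A controller named NVMe0 already exists "
                  "with the specified network path\n"}
print(json.dumps(req, indent=2))
print(is_duplicate_controller_error(err))
```

The test exercises this three ways: a plain duplicate attach, a duplicate with `-x disable` (multipath off), and a duplicate with `-x failover`; all three return `-114`, and the `NOT` wrapper treats that failure as the expected outcome.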
02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:34:44.291 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:34:44.291 [2024-07-11 02:40:32.715294] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:34:44.291 [2024-07-11 02:40:32.715394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1926051 ] 00:34:44.291 EAL: No free 2048 kB hugepages reported on node 1 00:34:44.291 [2024-07-11 02:40:32.776099] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:44.291 [2024-07-11 02:40:32.863556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:44.291 [2024-07-11 02:40:33.260359] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 84581788-1a6f-4604-8579-25b6604bdd32 already exists 00:34:44.291 [2024-07-11 02:40:33.260404] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:84581788-1a6f-4604-8579-25b6604bdd32 alias for bdev NVMe1n1 00:34:44.291 [2024-07-11 02:40:33.260421] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:34:44.291 Running I/O for 1 seconds... 
00:34:44.291 00:34:44.291 Latency(us) 00:34:44.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:44.291 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:34:44.291 NVMe0n1 : 1.00 16595.77 64.83 0.00 0.00 7699.33 6553.60 16505.36 00:34:44.291 =================================================================================================================== 00:34:44.291 Total : 16595.77 64.83 0.00 0.00 7699.33 6553.60 16505.36 00:34:44.291 Received shutdown signal, test time was about 1.000000 seconds 00:34:44.291 00:34:44.291 Latency(us) 00:34:44.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:44.291 =================================================================================================================== 00:34:44.291 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:44.291 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:44.291 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:34:44.292 02:40:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:34:44.292 02:40:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:44.292 02:40:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:34:44.292 02:40:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:44.292 02:40:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:34:44.292 02:40:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:44.292 02:40:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:44.292 rmmod nvme_tcp 00:34:44.292 rmmod nvme_fabrics 00:34:44.292 rmmod nvme_keyring 
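The bdevperf summary above reports 16595.77 IOPS and 64.83 MiB/s for a 4096-byte IO size; the second figure follows directly from the first. A quick sanity-check sketch (not part of the test itself):

```python
# Throughput from the logged IOPS: MiB/s = IOPS * io_size / 2^20.
def iops_to_mibps(iops, io_size_bytes=4096):
    return iops * io_size_bytes / (1024 * 1024)

# 16595.77 IOPS at 4 KiB matches the logged 64.83 MiB/s.
print(round(iops_to_mibps(16595.77), 2))
```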
00:34:44.550 02:40:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:44.550 02:40:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:34:44.550 02:40:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:34:44.550 02:40:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1925941 ']' 00:34:44.550 02:40:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1925941 00:34:44.550 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1925941 ']' 00:34:44.550 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1925941 00:34:44.550 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:34:44.550 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:44.550 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1925941 00:34:44.550 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:44.550 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:44.550 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1925941' 00:34:44.550 killing process with pid 1925941 00:34:44.550 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1925941 00:34:44.550 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1925941 00:34:44.550 02:40:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:44.550 02:40:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:44.550 02:40:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:44.550 02:40:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- 
# [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:44.550 02:40:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:44.550 02:40:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:44.550 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:44.550 02:40:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:47.142 02:40:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:47.142 00:34:47.142 real 0m6.597s 00:34:47.142 user 0m10.309s 00:34:47.142 sys 0m1.934s 00:34:47.142 02:40:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:47.142 02:40:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:47.142 ************************************ 00:34:47.142 END TEST nvmf_multicontroller 00:34:47.142 ************************************ 00:34:47.142 02:40:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:47.142 02:40:37 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:34:47.142 02:40:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:47.142 02:40:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:47.142 02:40:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:47.142 ************************************ 00:34:47.142 START TEST nvmf_aer 00:34:47.142 ************************************ 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:34:47.142 * Looking for test storage... 
00:34:47.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:47.142 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:47.143 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:47.143 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:47.143 02:40:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:47.143 02:40:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:47.143 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:47.143 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:47.143 02:40:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:34:47.143 02:40:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:48.520 02:40:38 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 
== e810 ]] 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:34:48.520 Found 0000:08:00.0 (0x8086 - 0x159b) 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:34:48.520 Found 0000:08:00.1 (0x8086 - 0x159b) 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:34:48.520 Found net devices under 0000:08:00.0: cvl_0_0 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:48.520 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:34:48.521 Found net devices under 0000:08:00.1: cvl_0_1 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
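The device discovery above matches PCI device IDs against per-family arrays from `nvmf/common.sh` (0x1592/0x159b for e810, 0x37d2 for x722, the 0x10xx/0xa2xx IDs for Mellanox), which is how both 0000:08:00.0 and 0000:08:00.1 are classified as e810 here. A hedged Python illustration of that matching logic (the function and set names are assumptions, not the script's own):

```python
# Illustration of the device-ID grouping done by gather_supported_nvmf_pci_devs.
# Only a subset of the IDs from the log is included.
E810 = {0x1592, 0x159b}   # Intel E810 family, as matched in the log
X722 = {0x37d2}           # Intel X722

def classify(vendor, device):
    """Return the NIC family label for a (vendor, device) ID pair."""
    if vendor != 0x8086:
        return "non-intel"
    if device in E810:
        return "e810"
    if device in X722:
        return "x722"
    return "unknown"

# 0000:08:00.0 in the log is 0x8086:0x159b.
print(classify(0x8086, 0x159b))
```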
00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set lo up 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:48.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:48.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:34:48.521 00:34:48.521 --- 10.0.0.2 ping statistics --- 00:34:48.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.521 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:48.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:48.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:34:48.521 00:34:48.521 --- 10.0.0.1 ping statistics --- 00:34:48.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.521 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1927673 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1927673 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1927673 ']' 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:48.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:48.521 02:40:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:48.521 [2024-07-11 02:40:38.889646] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:34:48.521 [2024-07-11 02:40:38.889740] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:48.521 EAL: No free 2048 kB hugepages reported on node 1 00:34:48.780 [2024-07-11 02:40:38.956653] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:48.780 [2024-07-11 02:40:39.044687] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:34:48.780 [2024-07-11 02:40:39.044741] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:48.780 [2024-07-11 02:40:39.044758] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:48.780 [2024-07-11 02:40:39.044772] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:48.780 [2024-07-11 02:40:39.044784] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:48.780 [2024-07-11 02:40:39.044837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:48.780 [2024-07-11 02:40:39.045153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:48.780 [2024-07-11 02:40:39.045241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:48.780 [2024-07-11 02:40:39.045362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:48.780 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:48.780 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:34:48.780 02:40:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:48.780 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:48.780 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:48.780 02:40:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:48.780 02:40:39 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:48.780 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.780 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:48.780 [2024-07-11 02:40:39.177123] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:48.780 02:40:39 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.780 02:40:39 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:34:48.780 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.780 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:49.039 Malloc0 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:49.039 [2024-07-11 02:40:39.226047] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:49.039 [ 00:34:49.039 { 00:34:49.039 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:49.039 "subtype": "Discovery", 00:34:49.039 "listen_addresses": [], 00:34:49.039 "allow_any_host": true, 00:34:49.039 "hosts": [] 00:34:49.039 }, 00:34:49.039 { 00:34:49.039 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:49.039 "subtype": "NVMe", 00:34:49.039 "listen_addresses": [ 00:34:49.039 { 00:34:49.039 "trtype": "TCP", 00:34:49.039 "adrfam": "IPv4", 00:34:49.039 "traddr": "10.0.0.2", 00:34:49.039 "trsvcid": "4420" 00:34:49.039 } 00:34:49.039 ], 00:34:49.039 "allow_any_host": true, 00:34:49.039 "hosts": [], 00:34:49.039 "serial_number": "SPDK00000000000001", 00:34:49.039 "model_number": "SPDK bdev Controller", 00:34:49.039 "max_namespaces": 2, 00:34:49.039 "min_cntlid": 1, 00:34:49.039 "max_cntlid": 65519, 00:34:49.039 "namespaces": [ 00:34:49.039 { 00:34:49.039 "nsid": 1, 00:34:49.039 "bdev_name": "Malloc0", 00:34:49.039 "name": "Malloc0", 00:34:49.039 "nguid": "9CA089C300FD4A63B2CCB6D08F09D414", 00:34:49.039 "uuid": "9ca089c3-00fd-4a63-b2cc-b6d08f09d414" 00:34:49.039 } 00:34:49.039 ] 00:34:49.039 } 00:34:49.039 ] 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1927786 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:34:49.039 02:40:39 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:34:49.039 EAL: No free 2048 kB hugepages reported on node 1 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.039 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:49.297 Malloc1 00:34:49.297 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.297 02:40:39 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:34:49.297 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.297 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:49.297 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.297 02:40:39 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:34:49.297 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.297 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:49.297 [ 00:34:49.297 { 00:34:49.297 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:49.297 "subtype": "Discovery", 00:34:49.297 "listen_addresses": [], 00:34:49.297 "allow_any_host": true, 00:34:49.297 "hosts": [] 00:34:49.297 }, 00:34:49.297 { 00:34:49.297 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:49.297 "subtype": "NVMe", 00:34:49.297 "listen_addresses": [ 00:34:49.297 { 00:34:49.297 "trtype": "TCP", 00:34:49.297 "adrfam": "IPv4", 00:34:49.297 "traddr": "10.0.0.2", 00:34:49.297 "trsvcid": "4420" 00:34:49.297 } 00:34:49.297 ], 00:34:49.297 "allow_any_host": true, 00:34:49.297 "hosts": [], 00:34:49.297 "serial_number": "SPDK00000000000001", 00:34:49.297 "model_number": "SPDK bdev Controller", 00:34:49.297 "max_namespaces": 2, 00:34:49.297 "min_cntlid": 1, 00:34:49.297 "max_cntlid": 65519, 
00:34:49.297 "namespaces": [ 00:34:49.297 { 00:34:49.297 "nsid": 1, 00:34:49.297 "bdev_name": "Malloc0", 00:34:49.297 "name": "Malloc0", 00:34:49.298 "nguid": "9CA089C300FD4A63B2CCB6D08F09D414", 00:34:49.298 "uuid": "9ca089c3-00fd-4a63-b2cc-b6d08f09d414" 00:34:49.298 }, 00:34:49.298 { 00:34:49.298 "nsid": 2, 00:34:49.298 "bdev_name": "Malloc1", 00:34:49.298 "name": "Malloc1", 00:34:49.298 "nguid": "A6CB7B9D7CC84479868A78ED85AEE8B1", 00:34:49.298 "uuid": "a6cb7b9d-7cc8-4479-868a-78ed85aee8b1" 00:34:49.298 } 00:34:49.298 ] 00:34:49.298 } 00:34:49.298 ] 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1927786 00:34:49.298 Asynchronous Event Request test 00:34:49.298 Attaching to 10.0.0.2 00:34:49.298 Attached to 10.0.0.2 00:34:49.298 Registering asynchronous event callbacks... 00:34:49.298 Starting namespace attribute notice tests for all controllers... 00:34:49.298 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:34:49.298 aer_cb - Changed Namespace 00:34:49.298 Cleaning up... 
00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:49.298 rmmod nvme_tcp 00:34:49.298 rmmod nvme_fabrics 00:34:49.298 rmmod nvme_keyring 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer 
-- nvmf/common.sh@124 -- # set -e 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1927673 ']' 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1927673 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1927673 ']' 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1927673 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1927673 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1927673' 00:34:49.298 killing process with pid 1927673 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1927673 00:34:49.298 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1927673 00:34:49.558 02:40:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:49.558 02:40:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:49.558 02:40:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:49.558 02:40:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:49.558 02:40:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:49.558 02:40:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:49.558 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:34:49.558 02:40:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.465 02:40:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:51.465 00:34:51.465 real 0m4.823s 00:34:51.465 user 0m3.666s 00:34:51.465 sys 0m1.624s 00:34:51.465 02:40:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:51.465 02:40:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:51.465 ************************************ 00:34:51.465 END TEST nvmf_aer 00:34:51.465 ************************************ 00:34:51.724 02:40:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:51.724 02:40:41 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:34:51.724 02:40:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:51.724 02:40:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:51.724 02:40:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:51.724 ************************************ 00:34:51.725 START TEST nvmf_async_init 00:34:51.725 ************************************ 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:34:51.725 * Looking for test storage... 
00:34:51.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:51.725 02:40:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:51.725 02:40:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:51.725 02:40:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:51.725 02:40:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:51.725 02:40:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:34:51.725 02:40:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:51.725 02:40:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:34:51.725 02:40:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:34:51.725 02:40:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:34:51.725 02:40:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:34:51.725 02:40:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:34:51.725 02:40:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:34:51.725 02:40:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=ff16229fb9914d9ca399f2811b5d863f 00:34:51.725 02:40:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:34:51.725 02:40:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:51.725 02:40:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:51.725 02:40:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:51.725 02:40:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:51.725 02:40:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:51.725 02:40:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.725 02:40:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:51.725 02:40:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.725 02:40:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:51.725 02:40:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:51.725 02:40:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:34:51.725 02:40:42 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@10 -- # set +x 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:53.631 
02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:34:53.631 Found 0000:08:00.0 (0x8086 - 0x159b) 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ 
tcp == rdma ]] 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:34:53.631 Found 0000:08:00.1 (0x8086 - 0x159b) 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:34:53.631 Found net devices under 0000:08:00.0: cvl_0_0 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:34:53.631 Found net devices under 0000:08:00.1: cvl_0_1 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:53.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:53.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:34:53.631 00:34:53.631 --- 10.0.0.2 ping statistics --- 00:34:53.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.631 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:53.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:53.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:34:53.631 00:34:53.631 --- 10.0.0.1 ping statistics --- 00:34:53.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.631 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 
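The namespace plumbing traced above (`ip netns add` … `iptables` … `ping`) builds a two-endpoint topology on a single host: the target interface moves into a private namespace while the initiator interface stays in the default one, so NVMe/TCP traffic really crosses the wire. A minimal sketch of the same sequence, assuming the `cvl_0_0`/`cvl_0_1` names and 10.0.0.0/24 addressing from the log; `RUN=echo` previews the commands, since executing them for real needs root:

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init steps traced above. RUN=echo prints the
# commands instead of executing them (the real sequence needs root).
RUN=${RUN:-}

setup_tcp_topology() {
    local tgt_if=$1 ini_if=$2
    local ns=${tgt_if}_ns_spdk            # namespace name, as in the log

    $RUN ip -4 addr flush "$tgt_if"
    $RUN ip -4 addr flush "$ini_if"
    $RUN ip netns add "$ns"
    $RUN ip link set "$tgt_if" netns "$ns"              # target side moves out
    $RUN ip addr add 10.0.0.1/24 dev "$ini_if"          # initiator IP
    $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"  # target IP
    $RUN ip link set "$ini_if" up
    $RUN ip netns exec "$ns" ip link set "$tgt_if" up
    $RUN ip netns exec "$ns" ip link set lo up
    # Let NVMe/TCP traffic through on the initiator side, then verify both ways.
    $RUN iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    $RUN ping -c 1 10.0.0.2
    $RUN ip netns exec "$ns" ping -c 1 10.0.0.1
}

RUN=echo
setup_tcp_topology cvl_0_0 cvl_0_1
```

With `RUN` unset on a root shell this reproduces the setup above verbatim; the symmetric pings at the end are the same reachability check the log records before the target is started.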
00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1929277 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1929277 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1929277 ']' 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:53.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:53.631 02:40:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:34:53.632 02:40:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:53.632 02:40:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:53.632 [2024-07-11 02:40:43.810725] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:34:53.632 [2024-07-11 02:40:43.810817] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:53.632 EAL: No free 2048 kB hugepages reported on node 1 00:34:53.632 [2024-07-11 02:40:43.875851] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:53.632 [2024-07-11 02:40:43.962026] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
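The `waitforlisten` step above blocks until the freshly launched `nvmf_tgt` opens its RPC socket at `/var/tmp/spdk.sock` (the trace shows `max_retries=100`). A rough standalone equivalent of that polling loop — the default path and retry budget are taken from the log, the rest is a sketch:

```shell
#!/usr/bin/env bash
# Poll for an application's UNIX-domain RPC socket, in the spirit of the
# waitforlisten step in the log above. Returns 0 once the socket exists,
# 1 after the retry budget is exhausted.
wait_for_rpc_sock() {
    local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100}
    while (( retries-- > 0 )); do
        [[ -S $sock ]] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    return 1
}

wait_for_rpc_sock /tmp/no-such.sock 3 || echo "no listener yet"
```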
00:34:53.632 [2024-07-11 02:40:43.962086] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:53.632 [2024-07-11 02:40:43.962102] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:53.632 [2024-07-11 02:40:43.962116] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:53.632 [2024-07-11 02:40:43.962130] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:53.632 [2024-07-11 02:40:43.962159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:53.890 [2024-07-11 02:40:44.088221] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:53.890 null0 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ff16229fb9914d9ca399f2811b5d863f 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:53.890 [2024-07-11 02:40:44.128415] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:53.890 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:54.147 nvme0n1 00:34:54.147 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.147 02:40:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:34:54.147 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.147 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:54.147 [ 00:34:54.147 { 00:34:54.147 "name": "nvme0n1", 00:34:54.147 "aliases": [ 00:34:54.147 "ff16229f-b991-4d9c-a399-f2811b5d863f" 00:34:54.147 ], 00:34:54.147 "product_name": "NVMe disk", 00:34:54.147 "block_size": 512, 00:34:54.147 "num_blocks": 2097152, 00:34:54.147 "uuid": "ff16229f-b991-4d9c-a399-f2811b5d863f", 00:34:54.147 "assigned_rate_limits": { 00:34:54.147 "rw_ios_per_sec": 0, 00:34:54.147 "rw_mbytes_per_sec": 0, 00:34:54.147 "r_mbytes_per_sec": 0, 00:34:54.147 "w_mbytes_per_sec": 0 00:34:54.147 }, 00:34:54.147 "claimed": false, 00:34:54.147 "zoned": false, 00:34:54.147 "supported_io_types": { 00:34:54.147 "read": true, 00:34:54.147 "write": true, 00:34:54.147 "unmap": false, 00:34:54.147 "flush": true, 00:34:54.147 "reset": true, 00:34:54.147 "nvme_admin": true, 00:34:54.147 "nvme_io": true, 00:34:54.147 "nvme_io_md": false, 00:34:54.148 "write_zeroes": true, 00:34:54.148 "zcopy": false, 00:34:54.148 "get_zone_info": false, 00:34:54.148 "zone_management": false, 00:34:54.148 "zone_append": false, 00:34:54.148 "compare": 
true, 00:34:54.148 "compare_and_write": true, 00:34:54.148 "abort": true, 00:34:54.148 "seek_hole": false, 00:34:54.148 "seek_data": false, 00:34:54.148 "copy": true, 00:34:54.148 "nvme_iov_md": false 00:34:54.148 }, 00:34:54.148 "memory_domains": [ 00:34:54.148 { 00:34:54.148 "dma_device_id": "system", 00:34:54.148 "dma_device_type": 1 00:34:54.148 } 00:34:54.148 ], 00:34:54.148 "driver_specific": { 00:34:54.148 "nvme": [ 00:34:54.148 { 00:34:54.148 "trid": { 00:34:54.148 "trtype": "TCP", 00:34:54.148 "adrfam": "IPv4", 00:34:54.148 "traddr": "10.0.0.2", 00:34:54.148 "trsvcid": "4420", 00:34:54.148 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:54.148 }, 00:34:54.148 "ctrlr_data": { 00:34:54.148 "cntlid": 1, 00:34:54.148 "vendor_id": "0x8086", 00:34:54.148 "model_number": "SPDK bdev Controller", 00:34:54.148 "serial_number": "00000000000000000000", 00:34:54.148 "firmware_revision": "24.09", 00:34:54.148 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:54.148 "oacs": { 00:34:54.148 "security": 0, 00:34:54.148 "format": 0, 00:34:54.148 "firmware": 0, 00:34:54.148 "ns_manage": 0 00:34:54.148 }, 00:34:54.148 "multi_ctrlr": true, 00:34:54.148 "ana_reporting": false 00:34:54.148 }, 00:34:54.148 "vs": { 00:34:54.148 "nvme_version": "1.3" 00:34:54.148 }, 00:34:54.148 "ns_data": { 00:34:54.148 "id": 1, 00:34:54.148 "can_share": true 00:34:54.148 } 00:34:54.148 } 00:34:54.148 ], 00:34:54.148 "mp_policy": "active_passive" 00:34:54.148 } 00:34:54.148 } 00:34:54.148 ] 00:34:54.148 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.148 02:40:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:34:54.148 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.148 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:54.148 [2024-07-11 02:40:44.381664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:54.148 [2024-07-11 02:40:44.381768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x768820 (9): Bad file descriptor 00:34:54.148 [2024-07-11 02:40:44.523695] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:54.148 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.148 02:40:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:34:54.148 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.148 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:54.148 [ 00:34:54.148 { 00:34:54.148 "name": "nvme0n1", 00:34:54.148 "aliases": [ 00:34:54.148 "ff16229f-b991-4d9c-a399-f2811b5d863f" 00:34:54.148 ], 00:34:54.148 "product_name": "NVMe disk", 00:34:54.148 "block_size": 512, 00:34:54.148 "num_blocks": 2097152, 00:34:54.148 "uuid": "ff16229f-b991-4d9c-a399-f2811b5d863f", 00:34:54.148 "assigned_rate_limits": { 00:34:54.148 "rw_ios_per_sec": 0, 00:34:54.148 "rw_mbytes_per_sec": 0, 00:34:54.148 "r_mbytes_per_sec": 0, 00:34:54.148 "w_mbytes_per_sec": 0 00:34:54.148 }, 00:34:54.148 "claimed": false, 00:34:54.148 "zoned": false, 00:34:54.148 "supported_io_types": { 00:34:54.148 "read": true, 00:34:54.148 "write": true, 00:34:54.148 "unmap": false, 00:34:54.148 "flush": true, 00:34:54.148 "reset": true, 00:34:54.148 "nvme_admin": true, 00:34:54.148 "nvme_io": true, 00:34:54.148 "nvme_io_md": false, 00:34:54.148 "write_zeroes": true, 00:34:54.148 "zcopy": false, 00:34:54.148 "get_zone_info": false, 00:34:54.148 "zone_management": false, 00:34:54.148 "zone_append": false, 00:34:54.148 "compare": true, 00:34:54.148 "compare_and_write": true, 00:34:54.148 "abort": true, 00:34:54.148 "seek_hole": false, 00:34:54.148 "seek_data": false, 00:34:54.148 "copy": true, 00:34:54.148 "nvme_iov_md": 
false 00:34:54.148 }, 00:34:54.148 "memory_domains": [ 00:34:54.148 { 00:34:54.148 "dma_device_id": "system", 00:34:54.148 "dma_device_type": 1 00:34:54.148 } 00:34:54.148 ], 00:34:54.148 "driver_specific": { 00:34:54.148 "nvme": [ 00:34:54.148 { 00:34:54.148 "trid": { 00:34:54.148 "trtype": "TCP", 00:34:54.148 "adrfam": "IPv4", 00:34:54.148 "traddr": "10.0.0.2", 00:34:54.148 "trsvcid": "4420", 00:34:54.148 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:54.148 }, 00:34:54.148 "ctrlr_data": { 00:34:54.148 "cntlid": 2, 00:34:54.148 "vendor_id": "0x8086", 00:34:54.148 "model_number": "SPDK bdev Controller", 00:34:54.148 "serial_number": "00000000000000000000", 00:34:54.148 "firmware_revision": "24.09", 00:34:54.148 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:54.148 "oacs": { 00:34:54.148 "security": 0, 00:34:54.148 "format": 0, 00:34:54.148 "firmware": 0, 00:34:54.148 "ns_manage": 0 00:34:54.148 }, 00:34:54.148 "multi_ctrlr": true, 00:34:54.148 "ana_reporting": false 00:34:54.148 }, 00:34:54.148 "vs": { 00:34:54.148 "nvme_version": "1.3" 00:34:54.148 }, 00:34:54.148 "ns_data": { 00:34:54.148 "id": 1, 00:34:54.148 "can_share": true 00:34:54.148 } 00:34:54.148 } 00:34:54.148 ], 00:34:54.148 "mp_policy": "active_passive" 00:34:54.148 } 00:34:54.148 } 00:34:54.148 ] 00:34:54.148 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.148 02:40:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:54.148 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.148 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:54.148 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.148 02:40:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:34:54.148 02:40:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.FHordYIoVr 00:34:54.148 02:40:44 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:34:54.148 02:40:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.FHordYIoVr 00:34:54.148 02:40:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:34:54.148 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.148 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:54.407 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.407 02:40:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:34:54.407 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.407 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:54.407 [2024-07-11 02:40:44.574421] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:54.407 [2024-07-11 02:40:44.574601] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:54.407 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.407 02:40:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FHordYIoVr 00:34:54.407 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.407 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:54.407 [2024-07-11 02:40:44.582421] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:34:54.407 02:40:44 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.407 02:40:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FHordYIoVr 00:34:54.407 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.407 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:54.407 [2024-07-11 02:40:44.590450] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:54.407 [2024-07-11 02:40:44.590525] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:34:54.407 nvme0n1 00:34:54.407 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.407 02:40:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:34:54.407 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.407 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:54.407 [ 00:34:54.407 { 00:34:54.407 "name": "nvme0n1", 00:34:54.407 "aliases": [ 00:34:54.407 "ff16229f-b991-4d9c-a399-f2811b5d863f" 00:34:54.407 ], 00:34:54.407 "product_name": "NVMe disk", 00:34:54.407 "block_size": 512, 00:34:54.407 "num_blocks": 2097152, 00:34:54.407 "uuid": "ff16229f-b991-4d9c-a399-f2811b5d863f", 00:34:54.407 "assigned_rate_limits": { 00:34:54.407 "rw_ios_per_sec": 0, 00:34:54.408 "rw_mbytes_per_sec": 0, 00:34:54.408 "r_mbytes_per_sec": 0, 00:34:54.408 "w_mbytes_per_sec": 0 00:34:54.408 }, 00:34:54.408 "claimed": false, 00:34:54.408 "zoned": false, 00:34:54.408 "supported_io_types": { 00:34:54.408 "read": true, 00:34:54.408 "write": true, 00:34:54.408 "unmap": false, 00:34:54.408 "flush": true, 00:34:54.408 "reset": true, 
00:34:54.408 "nvme_admin": true, 00:34:54.408 "nvme_io": true, 00:34:54.408 "nvme_io_md": false, 00:34:54.408 "write_zeroes": true, 00:34:54.408 "zcopy": false, 00:34:54.408 "get_zone_info": false, 00:34:54.408 "zone_management": false, 00:34:54.408 "zone_append": false, 00:34:54.408 "compare": true, 00:34:54.408 "compare_and_write": true, 00:34:54.408 "abort": true, 00:34:54.408 "seek_hole": false, 00:34:54.408 "seek_data": false, 00:34:54.408 "copy": true, 00:34:54.408 "nvme_iov_md": false 00:34:54.408 }, 00:34:54.408 "memory_domains": [ 00:34:54.408 { 00:34:54.408 "dma_device_id": "system", 00:34:54.408 "dma_device_type": 1 00:34:54.408 } 00:34:54.408 ], 00:34:54.408 "driver_specific": { 00:34:54.408 "nvme": [ 00:34:54.408 { 00:34:54.408 "trid": { 00:34:54.408 "trtype": "TCP", 00:34:54.408 "adrfam": "IPv4", 00:34:54.408 "traddr": "10.0.0.2", 00:34:54.408 "trsvcid": "4421", 00:34:54.408 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:54.408 }, 00:34:54.408 "ctrlr_data": { 00:34:54.408 "cntlid": 3, 00:34:54.408 "vendor_id": "0x8086", 00:34:54.408 "model_number": "SPDK bdev Controller", 00:34:54.408 "serial_number": "00000000000000000000", 00:34:54.408 "firmware_revision": "24.09", 00:34:54.408 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:54.408 "oacs": { 00:34:54.408 "security": 0, 00:34:54.408 "format": 0, 00:34:54.408 "firmware": 0, 00:34:54.408 "ns_manage": 0 00:34:54.408 }, 00:34:54.408 "multi_ctrlr": true, 00:34:54.408 "ana_reporting": false 00:34:54.408 }, 00:34:54.408 "vs": { 00:34:54.408 "nvme_version": "1.3" 00:34:54.408 }, 00:34:54.408 "ns_data": { 00:34:54.408 "id": 1, 00:34:54.408 "can_share": true 00:34:54.408 } 00:34:54.408 } 00:34:54.408 ], 00:34:54.408 "mp_policy": "active_passive" 00:34:54.408 } 00:34:54.408 } 00:34:54.408 ] 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
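Across the three `bdev_get_bdevs` dumps in this test, `"cntlid"` climbs from 1 to 2 to 3 — one step per reset or re-attach — which is the property the assertions are tracking. A throwaway way to pull that field out of the RPC output with `sed` (`jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'` would be the sturdier choice when `jq` is available; the JSON below is a trimmed stand-in for the real dump):

```shell
#!/usr/bin/env bash
# Extract the first "cntlid" value from bdev_get_bdevs-style JSON on stdin.
cntlid_of() {
    sed -n 's/.*"cntlid": \([0-9][0-9]*\).*/\1/p' | head -n 1
}

cntlid_of <<'JSON'
{ "ctrlr_data": { "cntlid": 3, "vendor_id": "0x8086" } }
JSON
```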
00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.FHordYIoVr 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:54.408 rmmod nvme_tcp 00:34:54.408 rmmod nvme_fabrics 00:34:54.408 rmmod nvme_keyring 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1929277 ']' 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1929277 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1929277 ']' 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1929277 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:34:54.408 02:40:44 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1929277 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1929277' 00:34:54.408 killing process with pid 1929277 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1929277 00:34:54.408 [2024-07-11 02:40:44.785567] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:34:54.408 [2024-07-11 02:40:44.785603] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:34:54.408 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1929277 00:34:54.668 02:40:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:54.668 02:40:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:54.668 02:40:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:54.668 02:40:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:54.668 02:40:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:54.668 02:40:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:54.668 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:54.668 02:40:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:56.573 02:40:46 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:56.573 00:34:56.573 real 0m5.050s 00:34:56.573 user 0m1.903s 00:34:56.573 sys 0m1.558s 00:34:56.573 02:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:56.573 02:40:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:56.573 ************************************ 00:34:56.573 END TEST nvmf_async_init 00:34:56.573 ************************************ 00:34:56.831 02:40:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:56.831 02:40:47 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:34:56.831 02:40:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:56.831 02:40:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:56.831 02:40:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:56.831 ************************************ 00:34:56.831 START TEST dma 00:34:56.831 ************************************ 00:34:56.831 02:40:47 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:34:56.831 * Looking for test storage... 
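The teardown at the end of `nvmf_async_init` above (`killprocess 1929277`, then `wait`) follows a stop-and-reap pattern: confirm the pid is alive, signal it, and wait on it so the target cannot linger as a zombie between tests. A simplified sketch of that pattern:

```shell
#!/usr/bin/env bash
# Stop-and-reap sketch of the killprocess step in the teardown above:
# check the pid is alive, signal it, then wait to reap it.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # not running (or not ours)
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap; ignore the signal status
}

sleep 60 &
pid=$!
killprocess "$pid" && echo "killed $pid"
```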
00:34:56.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:56.832 02:40:47 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:56.832 02:40:47 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:34:56.832 02:40:47 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:56.832 02:40:47 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:56.832 02:40:47 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.832 02:40:47 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.832 02:40:47 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:34:56.832 02:40:47 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:34:56.832 02:40:47 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:56.832 02:40:47 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:56.832 02:40:47 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:34:56.832 02:40:47 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:34:56.832 00:34:56.832 real 0m0.072s 00:34:56.832 user 0m0.034s 00:34:56.832 sys 0m0.043s 00:34:56.832 02:40:47 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:56.832 02:40:47 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:34:56.832 ************************************ 00:34:56.832 END TEST dma 00:34:56.832 ************************************ 00:34:56.832 02:40:47 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:34:56.832 02:40:47 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:34:56.832 02:40:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:56.832 02:40:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:56.832 02:40:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:56.832 ************************************ 00:34:56.832 START TEST nvmf_identify 00:34:56.832 ************************************ 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:34:56.832 * Looking for test storage... 00:34:56.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:56.832 02:40:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 
00:34:56.833 02:40:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:58.736 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:34:58.737 Found 0000:08:00.0 (0x8086 - 0x159b) 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:34:58.737 Found 0000:08:00.1 (0x8086 - 0x159b) 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:34:58.737 Found net devices under 0000:08:00.0: cvl_0_0 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:34:58.737 Found net devices under 0000:08:00.1: cvl_0_1 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:58.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:58.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:34:58.737 00:34:58.737 --- 10.0.0.2 ping statistics --- 00:34:58.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:58.737 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:58.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:58.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:34:58.737 00:34:58.737 --- 10.0.0.1 ping statistics --- 00:34:58.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:58.737 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1930893 00:34:58.737 02:40:48 
nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1930893 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1930893 ']' 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:58.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:58.737 02:40:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:58.737 [2024-07-11 02:40:48.928425] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:34:58.737 [2024-07-11 02:40:48.928531] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:58.737 EAL: No free 2048 kB hugepages reported on node 1 00:34:58.737 [2024-07-11 02:40:48.993456] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:58.737 [2024-07-11 02:40:49.082565] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:34:58.737 [2024-07-11 02:40:49.082625] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:58.737 [2024-07-11 02:40:49.082642] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:58.738 [2024-07-11 02:40:49.082656] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:58.738 [2024-07-11 02:40:49.082668] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:58.738 [2024-07-11 02:40:49.082741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:58.738 [2024-07-11 02:40:49.082826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:58.738 [2024-07-11 02:40:49.082907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:58.738 [2024-07-11 02:40:49.082911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:58.997 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:58.997 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:34:58.997 02:40:49 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:58.997 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.997 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:58.997 [2024-07-11 02:40:49.199153] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:58.997 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.997 02:40:49 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:34:58.998 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:58.998 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:58.998 02:40:49 
nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:58.998 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.998 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:58.998 Malloc0 00:34:58.998 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.998 02:40:49 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:58.998 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.998 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:58.998 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.998 02:40:49 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:34:58.998 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.998 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:58.998 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.998 02:40:49 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:58.998 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.998 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:58.998 [2024-07-11 02:40:49.267887] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:58.998 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.998 02:40:49 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:58.998 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.998 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:58.998 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.998 02:40:49 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:34:58.998 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.998 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:58.998 [ 00:34:58.998 { 00:34:58.998 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:58.998 "subtype": "Discovery", 00:34:58.998 "listen_addresses": [ 00:34:58.998 { 00:34:58.998 "trtype": "TCP", 00:34:58.998 "adrfam": "IPv4", 00:34:58.998 "traddr": "10.0.0.2", 00:34:58.998 "trsvcid": "4420" 00:34:58.998 } 00:34:58.998 ], 00:34:58.998 "allow_any_host": true, 00:34:58.998 "hosts": [] 00:34:58.998 }, 00:34:58.998 { 00:34:58.998 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:58.998 "subtype": "NVMe", 00:34:58.998 "listen_addresses": [ 00:34:58.998 { 00:34:58.998 "trtype": "TCP", 00:34:58.998 "adrfam": "IPv4", 00:34:58.998 "traddr": "10.0.0.2", 00:34:58.998 "trsvcid": "4420" 00:34:58.998 } 00:34:58.998 ], 00:34:58.998 "allow_any_host": true, 00:34:58.998 "hosts": [], 00:34:58.998 "serial_number": "SPDK00000000000001", 00:34:58.998 "model_number": "SPDK bdev Controller", 00:34:58.998 "max_namespaces": 32, 00:34:58.998 "min_cntlid": 1, 00:34:58.998 "max_cntlid": 65519, 00:34:58.998 "namespaces": [ 00:34:58.998 { 00:34:58.998 "nsid": 1, 00:34:58.998 "bdev_name": "Malloc0", 00:34:58.998 "name": "Malloc0", 00:34:58.998 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:34:58.998 "eui64": "ABCDEF0123456789", 00:34:58.998 "uuid": "17f16cfe-9831-4321-9895-9c8cbf1185e1" 00:34:58.998 } 00:34:58.998 ] 00:34:58.998 } 00:34:58.998 ] 00:34:58.998 02:40:49 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.998 02:40:49 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:34:58.998 [2024-07-11 02:40:49.308277] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:34:58.998 [2024-07-11 02:40:49.308329] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1930958 ] 00:34:58.998 EAL: No free 2048 kB hugepages reported on node 1 00:34:58.998 [2024-07-11 02:40:49.349304] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:34:58.998 [2024-07-11 02:40:49.349369] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:34:58.998 [2024-07-11 02:40:49.349380] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:34:58.998 [2024-07-11 02:40:49.349398] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:34:58.998 [2024-07-11 02:40:49.349410] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:34:58.998 [2024-07-11 02:40:49.349637] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:34:58.998 [2024-07-11 02:40:49.349698] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x23519f0 0 00:34:58.998 [2024-07-11 02:40:49.360535] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:34:58.998 [2024-07-11 02:40:49.360556] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:34:58.998 [2024-07-11 02:40:49.360566] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:34:58.998 [2024-07-11 02:40:49.360573] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:34:58.998 [2024-07-11 02:40:49.360629] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:58.998 [2024-07-11 02:40:49.360643] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:58.998 [2024-07-11 02:40:49.360652] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23519f0) 00:34:58.998 [2024-07-11 02:40:49.360670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:34:58.998 [2024-07-11 02:40:49.360698] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8100, cid 0, qid 0 00:34:58.998 [2024-07-11 02:40:49.368526] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:58.998 [2024-07-11 02:40:49.368544] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:58.998 [2024-07-11 02:40:49.368553] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:58.998 [2024-07-11 02:40:49.368562] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8100) on tqpair=0x23519f0 00:34:58.998 [2024-07-11 02:40:49.368579] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:34:58.998 [2024-07-11 02:40:49.368591] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:34:58.998 [2024-07-11 02:40:49.368602] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:34:58.998 [2024-07-11 02:40:49.368625] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:58.998 [2024-07-11 
02:40:49.368635] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:58.998 [2024-07-11 02:40:49.368643] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23519f0) 00:34:58.998 [2024-07-11 02:40:49.368656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.998 [2024-07-11 02:40:49.368682] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8100, cid 0, qid 0 00:34:58.998 [2024-07-11 02:40:49.368797] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:58.998 [2024-07-11 02:40:49.368813] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:58.998 [2024-07-11 02:40:49.368821] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:58.998 [2024-07-11 02:40:49.368829] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8100) on tqpair=0x23519f0 00:34:58.998 [2024-07-11 02:40:49.368839] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:34:58.998 [2024-07-11 02:40:49.368854] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:34:58.998 [2024-07-11 02:40:49.368874] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:58.998 [2024-07-11 02:40:49.368884] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:58.998 [2024-07-11 02:40:49.368892] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23519f0) 00:34:58.998 [2024-07-11 02:40:49.368904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.998 [2024-07-11 02:40:49.368927] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8100, cid 0, qid 0 
00:34:58.999 [2024-07-11 02:40:49.369025] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:58.999 [2024-07-11 02:40:49.369040] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:58.999 [2024-07-11 02:40:49.369048] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:58.999 [2024-07-11 02:40:49.369056] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8100) on tqpair=0x23519f0 00:34:58.999 [2024-07-11 02:40:49.369066] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:34:58.999 [2024-07-11 02:40:49.369081] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:34:58.999 [2024-07-11 02:40:49.369095] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:58.999 [2024-07-11 02:40:49.369103] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:58.999 [2024-07-11 02:40:49.369111] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23519f0) 00:34:58.999 [2024-07-11 02:40:49.369123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.999 [2024-07-11 02:40:49.369145] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8100, cid 0, qid 0 00:34:58.999 [2024-07-11 02:40:49.369237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:58.999 [2024-07-11 02:40:49.369252] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:58.999 [2024-07-11 02:40:49.369260] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:58.999 [2024-07-11 02:40:49.369268] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8100) on tqpair=0x23519f0 00:34:58.999 [2024-07-11 02:40:49.369279] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:34:58.999 [2024-07-11 02:40:49.369298] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:58.999 [2024-07-11 02:40:49.369307] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:58.999 [2024-07-11 02:40:49.369315] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23519f0) 00:34:58.999 [2024-07-11 02:40:49.369327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.999 [2024-07-11 02:40:49.369349] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8100, cid 0, qid 0 00:34:58.999 [2024-07-11 02:40:49.369449] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:58.999 [2024-07-11 02:40:49.369463] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:58.999 [2024-07-11 02:40:49.369470] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:58.999 [2024-07-11 02:40:49.369479] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8100) on tqpair=0x23519f0 00:34:58.999 [2024-07-11 02:40:49.369489] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:34:58.999 [2024-07-11 02:40:49.369499] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:34:58.999 [2024-07-11 02:40:49.369522] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:34:58.999 [2024-07-11 02:40:49.369635] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 
00:34:58.999 [2024-07-11 02:40:49.369649] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:34:58.999 [2024-07-11 02:40:49.369665] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:58.999 [2024-07-11 02:40:49.369674] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:58.999 [2024-07-11 02:40:49.369682] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23519f0) 00:34:58.999 [2024-07-11 02:40:49.369694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.999 [2024-07-11 02:40:49.369718] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8100, cid 0, qid 0 00:34:58.999 [2024-07-11 02:40:49.369813] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:58.999 [2024-07-11 02:40:49.369826] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:58.999 [2024-07-11 02:40:49.369834] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:58.999 [2024-07-11 02:40:49.369843] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8100) on tqpair=0x23519f0 00:34:58.999 [2024-07-11 02:40:49.369852] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:34:58.999 [2024-07-11 02:40:49.369870] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:58.999 [2024-07-11 02:40:49.369880] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:58.999 [2024-07-11 02:40:49.369887] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23519f0) 00:34:58.999 [2024-07-11 02:40:49.369900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.999 [2024-07-11 02:40:49.369923] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8100, cid 0, qid 0 00:34:58.999 [2024-07-11 02:40:49.370017] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:58.999 [2024-07-11 02:40:49.370031] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:58.999 [2024-07-11 02:40:49.370039] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:58.999 [2024-07-11 02:40:49.370047] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8100) on tqpair=0x23519f0 00:34:58.999 [2024-07-11 02:40:49.370056] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:34:58.999 [2024-07-11 02:40:49.370066] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:34:58.999 [2024-07-11 02:40:49.370081] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:34:58.999 [2024-07-11 02:40:49.370098] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:34:58.999 [2024-07-11 02:40:49.370114] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:58.999 [2024-07-11 02:40:49.370124] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23519f0) 00:34:58.999 [2024-07-11 02:40:49.370136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.999 [2024-07-11 02:40:49.370159] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8100, cid 0, qid 0 00:34:58.999 
[2024-07-11 02:40:49.370301] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:58.999 [2024-07-11 02:40:49.370320] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:58.999 [2024-07-11 02:40:49.370328] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:58.999 [2024-07-11 02:40:49.370336] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23519f0): datao=0, datal=4096, cccid=0 00:34:58.999 [2024-07-11 02:40:49.370350] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23a8100) on tqpair(0x23519f0): expected_datao=0, payload_size=4096 00:34:58.999 [2024-07-11 02:40:49.370360] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:58.999 [2024-07-11 02:40:49.370373] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:58.999 [2024-07-11 02:40:49.370382] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:58.999 [2024-07-11 02:40:49.370397] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:58.999 [2024-07-11 02:40:49.370408] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:58.999 [2024-07-11 02:40:49.370415] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:58.999 [2024-07-11 02:40:49.370423] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8100) on tqpair=0x23519f0 00:34:58.999 [2024-07-11 02:40:49.370437] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:34:58.999 [2024-07-11 02:40:49.370452] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:34:58.999 [2024-07-11 02:40:49.370462] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:34:58.999 [2024-07-11 02:40:49.370472] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:34:58.999 [2024-07-11 02:40:49.370481] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:34:58.999 [2024-07-11 02:40:49.370490] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:34:58.999 [2024-07-11 02:40:49.370507] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:34:58.999 [2024-07-11 02:40:49.370530] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:58.999 [2024-07-11 02:40:49.370539] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:58.999 [2024-07-11 02:40:49.370547] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23519f0) 00:34:59.000 [2024-07-11 02:40:49.370560] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:59.000 [2024-07-11 02:40:49.370583] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8100, cid 0, qid 0 00:34:59.000 [2024-07-11 02:40:49.370686] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.000 [2024-07-11 02:40:49.370700] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.000 [2024-07-11 02:40:49.370708] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.000 [2024-07-11 02:40:49.370716] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8100) on tqpair=0x23519f0 00:34:59.000 [2024-07-11 02:40:49.370729] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.000 [2024-07-11 02:40:49.370738] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.000 [2024-07-11 02:40:49.370745] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23519f0) 00:34:59.000 [2024-07-11 02:40:49.370757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:59.000 [2024-07-11 02:40:49.370768] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.000 [2024-07-11 02:40:49.370776] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.000 [2024-07-11 02:40:49.370784] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x23519f0) 00:34:59.000 [2024-07-11 02:40:49.370794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:59.000 [2024-07-11 02:40:49.370805] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.000 [2024-07-11 02:40:49.370813] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.000 [2024-07-11 02:40:49.370825] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x23519f0) 00:34:59.000 [2024-07-11 02:40:49.370836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:59.000 [2024-07-11 02:40:49.370847] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.000 [2024-07-11 02:40:49.370855] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.000 [2024-07-11 02:40:49.370863] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23519f0) 00:34:59.000 [2024-07-11 02:40:49.370873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:59.000 [2024-07-11 02:40:49.370883] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive 
timeout (timeout 30000 ms) 00:34:59.000 [2024-07-11 02:40:49.370903] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:34:59.000 [2024-07-11 02:40:49.370918] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.000 [2024-07-11 02:40:49.370926] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23519f0) 00:34:59.000 [2024-07-11 02:40:49.370938] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.000 [2024-07-11 02:40:49.370963] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8100, cid 0, qid 0 00:34:59.000 [2024-07-11 02:40:49.370975] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8280, cid 1, qid 0 00:34:59.000 [2024-07-11 02:40:49.370984] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8400, cid 2, qid 0 00:34:59.000 [2024-07-11 02:40:49.370994] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8580, cid 3, qid 0 00:34:59.000 [2024-07-11 02:40:49.371003] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8700, cid 4, qid 0 00:34:59.000 [2024-07-11 02:40:49.371124] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.000 [2024-07-11 02:40:49.371137] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.000 [2024-07-11 02:40:49.371145] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.000 [2024-07-11 02:40:49.371153] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8700) on tqpair=0x23519f0 00:34:59.000 [2024-07-11 02:40:49.371164] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:34:59.000 [2024-07-11 
02:40:49.371174] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:34:59.000 [2024-07-11 02:40:49.371193] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.000 [2024-07-11 02:40:49.371203] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23519f0) 00:34:59.000 [2024-07-11 02:40:49.371215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.000 [2024-07-11 02:40:49.371239] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8700, cid 4, qid 0 00:34:59.000 [2024-07-11 02:40:49.371348] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:59.000 [2024-07-11 02:40:49.371366] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:59.000 [2024-07-11 02:40:49.371375] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:59.000 [2024-07-11 02:40:49.371382] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23519f0): datao=0, datal=4096, cccid=4 00:34:59.000 [2024-07-11 02:40:49.371392] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23a8700) on tqpair(0x23519f0): expected_datao=0, payload_size=4096 00:34:59.000 [2024-07-11 02:40:49.371400] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.000 [2024-07-11 02:40:49.371423] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:59.000 [2024-07-11 02:40:49.371433] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:59.000 [2024-07-11 02:40:49.411600] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.000 [2024-07-11 02:40:49.411623] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.000 [2024-07-11 02:40:49.411631] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:34:59.000 [2024-07-11 02:40:49.411640] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8700) on tqpair=0x23519f0 00:34:59.000 [2024-07-11 02:40:49.411661] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:34:59.000 [2024-07-11 02:40:49.411698] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.000 [2024-07-11 02:40:49.411710] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23519f0) 00:34:59.000 [2024-07-11 02:40:49.411724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.000 [2024-07-11 02:40:49.411737] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.000 [2024-07-11 02:40:49.411746] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.000 [2024-07-11 02:40:49.411753] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x23519f0) 00:34:59.000 [2024-07-11 02:40:49.411764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:34:59.000 [2024-07-11 02:40:49.411795] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8700, cid 4, qid 0 00:34:59.000 [2024-07-11 02:40:49.411809] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8880, cid 5, qid 0 00:34:59.000 [2024-07-11 02:40:49.411941] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:59.000 [2024-07-11 02:40:49.411955] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:59.000 [2024-07-11 02:40:49.411963] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:59.000 [2024-07-11 02:40:49.411971] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x23519f0): datao=0, datal=1024, cccid=4 00:34:59.000 [2024-07-11 02:40:49.411980] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23a8700) on tqpair(0x23519f0): expected_datao=0, payload_size=1024 00:34:59.000 [2024-07-11 02:40:49.411989] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.000 [2024-07-11 02:40:49.412001] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:59.000 [2024-07-11 02:40:49.412009] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:59.000 [2024-07-11 02:40:49.412020] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.000 [2024-07-11 02:40:49.412030] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.000 [2024-07-11 02:40:49.412038] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.000 [2024-07-11 02:40:49.412046] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8880) on tqpair=0x23519f0 00:34:59.266 [2024-07-11 02:40:49.456527] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.266 [2024-07-11 02:40:49.456550] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.266 [2024-07-11 02:40:49.456560] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.266 [2024-07-11 02:40:49.456569] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8700) on tqpair=0x23519f0 00:34:59.266 [2024-07-11 02:40:49.456589] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.266 [2024-07-11 02:40:49.456599] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23519f0) 00:34:59.266 [2024-07-11 02:40:49.456613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.266 [2024-07-11 02:40:49.456649] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x23a8700, cid 4, qid 0 00:34:59.266 [2024-07-11 02:40:49.456767] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:59.266 [2024-07-11 02:40:49.456787] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:59.266 [2024-07-11 02:40:49.456796] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:59.266 [2024-07-11 02:40:49.456804] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23519f0): datao=0, datal=3072, cccid=4 00:34:59.266 [2024-07-11 02:40:49.456813] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23a8700) on tqpair(0x23519f0): expected_datao=0, payload_size=3072 00:34:59.266 [2024-07-11 02:40:49.456822] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.266 [2024-07-11 02:40:49.456834] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:59.266 [2024-07-11 02:40:49.456843] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:59.266 [2024-07-11 02:40:49.456857] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.266 [2024-07-11 02:40:49.456868] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.266 [2024-07-11 02:40:49.456876] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.266 [2024-07-11 02:40:49.456884] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8700) on tqpair=0x23519f0 00:34:59.266 [2024-07-11 02:40:49.456901] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.266 [2024-07-11 02:40:49.456910] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23519f0) 00:34:59.266 [2024-07-11 02:40:49.456923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.266 [2024-07-11 02:40:49.456953] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8700, cid 4, qid 0 00:34:59.266 [2024-07-11 02:40:49.457067] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:59.266 [2024-07-11 02:40:49.457082] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:59.266 [2024-07-11 02:40:49.457090] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:59.266 [2024-07-11 02:40:49.457097] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23519f0): datao=0, datal=8, cccid=4 00:34:59.266 [2024-07-11 02:40:49.457108] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23a8700) on tqpair(0x23519f0): expected_datao=0, payload_size=8 00:34:59.266 [2024-07-11 02:40:49.457117] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.266 [2024-07-11 02:40:49.457129] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:59.266 [2024-07-11 02:40:49.457137] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:59.266 [2024-07-11 02:40:49.499543] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.266 [2024-07-11 02:40:49.499564] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.266 [2024-07-11 02:40:49.499572] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.266 [2024-07-11 02:40:49.499581] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8700) on tqpair=0x23519f0 00:34:59.266 ===================================================== 00:34:59.266 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:59.266 ===================================================== 00:34:59.266 Controller Capabilities/Features 00:34:59.266 ================================ 00:34:59.266 Vendor ID: 0000 00:34:59.266 Subsystem Vendor ID: 0000 00:34:59.266 Serial Number: .................... 
00:34:59.266 Model Number: ........................................ 00:34:59.266 Firmware Version: 24.09 00:34:59.266 Recommended Arb Burst: 0 00:34:59.266 IEEE OUI Identifier: 00 00 00 00:34:59.266 Multi-path I/O 00:34:59.266 May have multiple subsystem ports: No 00:34:59.266 May have multiple controllers: No 00:34:59.266 Associated with SR-IOV VF: No 00:34:59.266 Max Data Transfer Size: 131072 00:34:59.266 Max Number of Namespaces: 0 00:34:59.266 Max Number of I/O Queues: 1024 00:34:59.266 NVMe Specification Version (VS): 1.3 00:34:59.266 NVMe Specification Version (Identify): 1.3 00:34:59.266 Maximum Queue Entries: 128 00:34:59.266 Contiguous Queues Required: Yes 00:34:59.266 Arbitration Mechanisms Supported 00:34:59.266 Weighted Round Robin: Not Supported 00:34:59.266 Vendor Specific: Not Supported 00:34:59.266 Reset Timeout: 15000 ms 00:34:59.266 Doorbell Stride: 4 bytes 00:34:59.266 NVM Subsystem Reset: Not Supported 00:34:59.266 Command Sets Supported 00:34:59.266 NVM Command Set: Supported 00:34:59.266 Boot Partition: Not Supported 00:34:59.266 Memory Page Size Minimum: 4096 bytes 00:34:59.266 Memory Page Size Maximum: 4096 bytes 00:34:59.266 Persistent Memory Region: Not Supported 00:34:59.266 Optional Asynchronous Events Supported 00:34:59.266 Namespace Attribute Notices: Not Supported 00:34:59.266 Firmware Activation Notices: Not Supported 00:34:59.266 ANA Change Notices: Not Supported 00:34:59.266 PLE Aggregate Log Change Notices: Not Supported 00:34:59.266 LBA Status Info Alert Notices: Not Supported 00:34:59.266 EGE Aggregate Log Change Notices: Not Supported 00:34:59.266 Normal NVM Subsystem Shutdown event: Not Supported 00:34:59.266 Zone Descriptor Change Notices: Not Supported 00:34:59.266 Discovery Log Change Notices: Supported 00:34:59.266 Controller Attributes 00:34:59.266 128-bit Host Identifier: Not Supported 00:34:59.266 Non-Operational Permissive Mode: Not Supported 00:34:59.266 NVM Sets: Not Supported 00:34:59.266 Read Recovery Levels: Not 
Supported 00:34:59.266 Endurance Groups: Not Supported 00:34:59.266 Predictable Latency Mode: Not Supported 00:34:59.266 Traffic Based Keep ALive: Not Supported 00:34:59.266 Namespace Granularity: Not Supported 00:34:59.266 SQ Associations: Not Supported 00:34:59.266 UUID List: Not Supported 00:34:59.266 Multi-Domain Subsystem: Not Supported 00:34:59.266 Fixed Capacity Management: Not Supported 00:34:59.266 Variable Capacity Management: Not Supported 00:34:59.266 Delete Endurance Group: Not Supported 00:34:59.266 Delete NVM Set: Not Supported 00:34:59.266 Extended LBA Formats Supported: Not Supported 00:34:59.266 Flexible Data Placement Supported: Not Supported 00:34:59.266 00:34:59.266 Controller Memory Buffer Support 00:34:59.266 ================================ 00:34:59.266 Supported: No 00:34:59.266 00:34:59.266 Persistent Memory Region Support 00:34:59.266 ================================ 00:34:59.266 Supported: No 00:34:59.266 00:34:59.266 Admin Command Set Attributes 00:34:59.266 ============================ 00:34:59.267 Security Send/Receive: Not Supported 00:34:59.267 Format NVM: Not Supported 00:34:59.267 Firmware Activate/Download: Not Supported 00:34:59.267 Namespace Management: Not Supported 00:34:59.267 Device Self-Test: Not Supported 00:34:59.267 Directives: Not Supported 00:34:59.267 NVMe-MI: Not Supported 00:34:59.267 Virtualization Management: Not Supported 00:34:59.267 Doorbell Buffer Config: Not Supported 00:34:59.267 Get LBA Status Capability: Not Supported 00:34:59.267 Command & Feature Lockdown Capability: Not Supported 00:34:59.267 Abort Command Limit: 1 00:34:59.267 Async Event Request Limit: 4 00:34:59.267 Number of Firmware Slots: N/A 00:34:59.267 Firmware Slot 1 Read-Only: N/A 00:34:59.267 Firmware Activation Without Reset: N/A 00:34:59.267 Multiple Update Detection Support: N/A 00:34:59.267 Firmware Update Granularity: No Information Provided 00:34:59.267 Per-Namespace SMART Log: No 00:34:59.267 Asymmetric Namespace Access Log Page: Not 
Supported 00:34:59.267 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:59.267 Command Effects Log Page: Not Supported 00:34:59.267 Get Log Page Extended Data: Supported 00:34:59.267 Telemetry Log Pages: Not Supported 00:34:59.267 Persistent Event Log Pages: Not Supported 00:34:59.267 Supported Log Pages Log Page: May Support 00:34:59.267 Commands Supported & Effects Log Page: Not Supported 00:34:59.267 Feature Identifiers & Effects Log Page:May Support 00:34:59.267 NVMe-MI Commands & Effects Log Page: May Support 00:34:59.267 Data Area 4 for Telemetry Log: Not Supported 00:34:59.267 Error Log Page Entries Supported: 128 00:34:59.267 Keep Alive: Not Supported 00:34:59.267 00:34:59.267 NVM Command Set Attributes 00:34:59.267 ========================== 00:34:59.267 Submission Queue Entry Size 00:34:59.267 Max: 1 00:34:59.267 Min: 1 00:34:59.267 Completion Queue Entry Size 00:34:59.267 Max: 1 00:34:59.267 Min: 1 00:34:59.267 Number of Namespaces: 0 00:34:59.267 Compare Command: Not Supported 00:34:59.267 Write Uncorrectable Command: Not Supported 00:34:59.267 Dataset Management Command: Not Supported 00:34:59.267 Write Zeroes Command: Not Supported 00:34:59.267 Set Features Save Field: Not Supported 00:34:59.267 Reservations: Not Supported 00:34:59.267 Timestamp: Not Supported 00:34:59.267 Copy: Not Supported 00:34:59.267 Volatile Write Cache: Not Present 00:34:59.267 Atomic Write Unit (Normal): 1 00:34:59.267 Atomic Write Unit (PFail): 1 00:34:59.267 Atomic Compare & Write Unit: 1 00:34:59.267 Fused Compare & Write: Supported 00:34:59.267 Scatter-Gather List 00:34:59.267 SGL Command Set: Supported 00:34:59.267 SGL Keyed: Supported 00:34:59.267 SGL Bit Bucket Descriptor: Not Supported 00:34:59.267 SGL Metadata Pointer: Not Supported 00:34:59.267 Oversized SGL: Not Supported 00:34:59.267 SGL Metadata Address: Not Supported 00:34:59.267 SGL Offset: Supported 00:34:59.267 Transport SGL Data Block: Not Supported 00:34:59.267 Replay Protected Memory Block: Not 
Supported 00:34:59.267 00:34:59.267 Firmware Slot Information 00:34:59.267 ========================= 00:34:59.267 Active slot: 0 00:34:59.267 00:34:59.267 00:34:59.267 Error Log 00:34:59.267 ========= 00:34:59.267 00:34:59.267 Active Namespaces 00:34:59.267 ================= 00:34:59.267 Discovery Log Page 00:34:59.267 ================== 00:34:59.267 Generation Counter: 2 00:34:59.267 Number of Records: 2 00:34:59.267 Record Format: 0 00:34:59.267 00:34:59.267 Discovery Log Entry 0 00:34:59.267 ---------------------- 00:34:59.267 Transport Type: 3 (TCP) 00:34:59.267 Address Family: 1 (IPv4) 00:34:59.267 Subsystem Type: 3 (Current Discovery Subsystem) 00:34:59.267 Entry Flags: 00:34:59.267 Duplicate Returned Information: 1 00:34:59.267 Explicit Persistent Connection Support for Discovery: 1 00:34:59.267 Transport Requirements: 00:34:59.267 Secure Channel: Not Required 00:34:59.267 Port ID: 0 (0x0000) 00:34:59.267 Controller ID: 65535 (0xffff) 00:34:59.267 Admin Max SQ Size: 128 00:34:59.267 Transport Service Identifier: 4420 00:34:59.267 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:59.267 Transport Address: 10.0.0.2 00:34:59.267 Discovery Log Entry 1 00:34:59.267 ---------------------- 00:34:59.267 Transport Type: 3 (TCP) 00:34:59.267 Address Family: 1 (IPv4) 00:34:59.267 Subsystem Type: 2 (NVM Subsystem) 00:34:59.267 Entry Flags: 00:34:59.267 Duplicate Returned Information: 0 00:34:59.267 Explicit Persistent Connection Support for Discovery: 0 00:34:59.267 Transport Requirements: 00:34:59.267 Secure Channel: Not Required 00:34:59.267 Port ID: 0 (0x0000) 00:34:59.267 Controller ID: 65535 (0xffff) 00:34:59.267 Admin Max SQ Size: 128 00:34:59.267 Transport Service Identifier: 4420 00:34:59.267 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:34:59.267 Transport Address: 10.0.0.2 [2024-07-11 02:40:49.499706] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 
00:34:59.267 [2024-07-11 02:40:49.499730] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8100) on tqpair=0x23519f0 00:34:59.267 [2024-07-11 02:40:49.499743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:59.267 [2024-07-11 02:40:49.499753] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8280) on tqpair=0x23519f0 00:34:59.267 [2024-07-11 02:40:49.499763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:59.267 [2024-07-11 02:40:49.499772] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8400) on tqpair=0x23519f0 00:34:59.267 [2024-07-11 02:40:49.499781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:59.267 [2024-07-11 02:40:49.499791] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8580) on tqpair=0x23519f0 00:34:59.267 [2024-07-11 02:40:49.499803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:59.267 [2024-07-11 02:40:49.499823] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.267 [2024-07-11 02:40:49.499834] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.267 [2024-07-11 02:40:49.499842] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23519f0) 00:34:59.267 [2024-07-11 02:40:49.499855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.267 [2024-07-11 02:40:49.499882] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8580, cid 3, qid 0 00:34:59.267 [2024-07-11 02:40:49.499971] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:34:59.267 [2024-07-11 02:40:49.499985] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.267 [2024-07-11 02:40:49.499993] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.267 [2024-07-11 02:40:49.500001] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8580) on tqpair=0x23519f0 00:34:59.267 [2024-07-11 02:40:49.500016] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.267 [2024-07-11 02:40:49.500026] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.267 [2024-07-11 02:40:49.500033] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23519f0) 00:34:59.268 [2024-07-11 02:40:49.500046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.268 [2024-07-11 02:40:49.500075] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8580, cid 3, qid 0 00:34:59.268 [2024-07-11 02:40:49.500198] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.268 [2024-07-11 02:40:49.500213] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.268 [2024-07-11 02:40:49.500221] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.268 [2024-07-11 02:40:49.500229] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8580) on tqpair=0x23519f0 00:34:59.268 [2024-07-11 02:40:49.500241] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:34:59.268 [2024-07-11 02:40:49.500250] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:34:59.268 [2024-07-11 02:40:49.500268] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.268 [2024-07-11 02:40:49.500278] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:34:59.268 [2024-07-11 02:40:49.500286] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23519f0) 00:34:59.268 [2024-07-11 02:40:49.500298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.268 [2024-07-11 02:40:49.500320] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8580, cid 3, qid 0 00:34:59.268 [2024-07-11 02:40:49.500419] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.268 [2024-07-11 02:40:49.500435] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.268 [2024-07-11 02:40:49.500442] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.268 [2024-07-11 02:40:49.500451] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8580) on tqpair=0x23519f0 00:34:59.268 [2024-07-11 02:40:49.500469] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.268 [2024-07-11 02:40:49.500480] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.268 [2024-07-11 02:40:49.500487] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23519f0) 00:34:59.268 [2024-07-11 02:40:49.500500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.268 [2024-07-11 02:40:49.500531] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8580, cid 3, qid 0 00:34:59.268 [2024-07-11 02:40:49.500641] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.268 [2024-07-11 02:40:49.500657] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.268 [2024-07-11 02:40:49.500665] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.268 [2024-07-11 02:40:49.500673] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x23a8580) on tqpair=0x23519f0 00:34:59.268 [2024-07-11 02:40:49.500691] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.268 [2024-07-11 02:40:49.500701] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.268 [2024-07-11 02:40:49.500709] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23519f0) 00:34:59.268 [2024-07-11 02:40:49.500721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.268 [2024-07-11 02:40:49.500744] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8580, cid 3, qid 0 00:34:59.268 [2024-07-11 02:40:49.500831] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.268 [2024-07-11 02:40:49.500845] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.268 [2024-07-11 02:40:49.500853] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.268 [2024-07-11 02:40:49.500861] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8580) on tqpair=0x23519f0 00:34:59.268 [2024-07-11 02:40:49.500879] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.268 [2024-07-11 02:40:49.500888] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.268 [2024-07-11 02:40:49.500896] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23519f0) 00:34:59.268 [2024-07-11 02:40:49.500908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.268 [2024-07-11 02:40:49.500933] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8580, cid 3, qid 0 00:34:59.268 [2024-07-11 02:40:49.501035] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.268 [2024-07-11 02:40:49.501051] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:34:59.268 [2024-07-11 02:40:49.501058] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.268 [2024-07-11 02:40:49.501066] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8580) on tqpair=0x23519f0 00:34:59.268 [2024-07-11 02:40:49.501085] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.268 [2024-07-11 02:40:49.501095] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.268 [2024-07-11 02:40:49.501102] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23519f0) 00:34:59.268 [2024-07-11 02:40:49.501114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.268 [2024-07-11 02:40:49.501137] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8580, cid 3, qid 0 00:34:59.268 [2024-07-11 02:40:49.501229] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.268 [2024-07-11 02:40:49.501243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.268 [2024-07-11 02:40:49.501251] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.268 [2024-07-11 02:40:49.501259] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8580) on tqpair=0x23519f0 00:34:59.268 [2024-07-11 02:40:49.501276] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.268 [2024-07-11 02:40:49.501286] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.268 [2024-07-11 02:40:49.501294] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23519f0) 00:34:59.268 [2024-07-11 02:40:49.501306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.268 [2024-07-11 02:40:49.501328] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8580, cid 3, qid 0 00:34:59.268 [2024-07-11 02:40:49.501419] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.268 [2024-07-11 02:40:49.501437] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.268 [2024-07-11 02:40:49.501446] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.268 [2024-07-11 02:40:49.501454] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8580) on tqpair=0x23519f0 00:34:59.268 [2024-07-11 02:40:49.501473] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.268 [2024-07-11 02:40:49.501483] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.268 [2024-07-11 02:40:49.501491] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23519f0) 00:34:59.268 [2024-07-11 02:40:49.501502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.268 [2024-07-11 02:40:49.501533] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8580, cid 3, qid 0 00:34:59.268 [2024-07-11 02:40:49.501627] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.268 [2024-07-11 02:40:49.501643] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.268 [2024-07-11 02:40:49.501651] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.268 [2024-07-11 02:40:49.501659] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8580) on tqpair=0x23519f0 00:34:59.268 [2024-07-11 02:40:49.501677] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.268 [2024-07-11 02:40:49.501687] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.268 [2024-07-11 02:40:49.501695] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x23519f0) 00:34:59.268 [2024-07-11 02:40:49.501707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.268 [2024-07-11 02:40:49.501730] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8580, cid 3, qid 0 00:34:59.268 [2024-07-11 02:40:49.501826] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.268 [2024-07-11 02:40:49.501839] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.268 [2024-07-11 02:40:49.501847] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.268 [2024-07-11 02:40:49.501855] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8580) on tqpair=0x23519f0 00:34:59.268 [2024-07-11 02:40:49.501873] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.268 [2024-07-11 02:40:49.501883] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.268 [2024-07-11 02:40:49.501891] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23519f0) 00:34:59.268 [2024-07-11 02:40:49.501903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.268 [2024-07-11 02:40:49.501924] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8580, cid 3, qid 0 00:34:59.268 [2024-07-11 02:40:49.502015] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.268 [2024-07-11 02:40:49.502030] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.268 [2024-07-11 02:40:49.502038] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.269 [2024-07-11 02:40:49.502046] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8580) on tqpair=0x23519f0 00:34:59.269 [2024-07-11 02:40:49.502065] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:34:59.269 [2024-07-11 02:40:49.502074] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.269 [2024-07-11 02:40:49.502082] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23519f0) 00:34:59.269 [2024-07-11 02:40:49.502095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.269 [2024-07-11 02:40:49.502116] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8580, cid 3, qid 0 00:34:59.269 [2024-07-11 02:40:49.502203] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.269 [2024-07-11 02:40:49.502218] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.269 [2024-07-11 02:40:49.502226] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.269 [2024-07-11 02:40:49.502239] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8580) on tqpair=0x23519f0 00:34:59.269 [2024-07-11 02:40:49.502258] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.269 [2024-07-11 02:40:49.502268] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.269 [2024-07-11 02:40:49.502276] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23519f0) 00:34:59.269 [2024-07-11 02:40:49.502288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.269 [2024-07-11 02:40:49.502311] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8580, cid 3, qid 0 00:34:59.269 [2024-07-11 02:40:49.502396] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.269 [2024-07-11 02:40:49.502409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.269 [2024-07-11 02:40:49.502417] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:34:59.269 [2024-07-11 02:40:49.502425] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8580) on tqpair=0x23519f0 00:34:59.269 [2024-07-11 02:40:49.502443] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.269 [2024-07-11 02:40:49.502453] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.269 [2024-07-11 02:40:49.502461] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23519f0) 00:34:59.269 [2024-07-11 02:40:49.502473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.269 [2024-07-11 02:40:49.502495] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8580, cid 3, qid 0 00:34:59.269 [2024-07-11 02:40:49.502592] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.269 [2024-07-11 02:40:49.502606] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.269 [2024-07-11 02:40:49.502614] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.269 [2024-07-11 02:40:49.502622] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8580) on tqpair=0x23519f0 00:34:59.269 [2024-07-11 02:40:49.502640] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.269 [2024-07-11 02:40:49.502650] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.269 [2024-07-11 02:40:49.502658] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23519f0) 00:34:59.269 [2024-07-11 02:40:49.502670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.269 [2024-07-11 02:40:49.502693] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8580, cid 3, qid 0 00:34:59.269 [2024-07-11 02:40:49.502791] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:34:59.269 [2024-07-11 02:40:49.502804] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.269 [2024-07-11 02:40:49.502812] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.269 [2024-07-11 02:40:49.502820] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8580) on tqpair=0x23519f0 00:34:59.269 [2024-07-11 02:40:49.502844] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.269 [2024-07-11 02:40:49.502854] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.269 [2024-07-11 02:40:49.502861] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23519f0) 00:34:59.269 [2024-07-11 02:40:49.502873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.269 [2024-07-11 02:40:49.502895] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8580, cid 3, qid 0 00:34:59.269 [2024-07-11 02:40:49.502994] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.269 [2024-07-11 02:40:49.503010] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.269 [2024-07-11 02:40:49.503017] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.269 [2024-07-11 02:40:49.503025] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8580) on tqpair=0x23519f0 00:34:59.269 [2024-07-11 02:40:49.503048] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.269 [2024-07-11 02:40:49.503059] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.269 [2024-07-11 02:40:49.503067] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23519f0) 00:34:59.269 [2024-07-11 02:40:49.503079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:59.269 [2024-07-11 02:40:49.503101] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8580, cid 3, qid 0 00:34:59.269 [2024-07-11 02:40:49.503185] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.269 [2024-07-11 02:40:49.503199] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.269 [2024-07-11 02:40:49.503207] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.269 [2024-07-11 02:40:49.503215] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8580) on tqpair=0x23519f0 00:34:59.269 [2024-07-11 02:40:49.503233] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.269 [2024-07-11 02:40:49.503242] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.269 [2024-07-11 02:40:49.503250] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23519f0) 00:34:59.269 [2024-07-11 02:40:49.503262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.269 [2024-07-11 02:40:49.503285] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8580, cid 3, qid 0 00:34:59.269 [2024-07-11 02:40:49.503384] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.269 [2024-07-11 02:40:49.503399] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.269 [2024-07-11 02:40:49.503407] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.269 [2024-07-11 02:40:49.503415] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8580) on tqpair=0x23519f0 00:34:59.269 [2024-07-11 02:40:49.503433] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.269 [2024-07-11 02:40:49.503443] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.269 [2024-07-11 02:40:49.503451] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23519f0) 00:34:59.269 [2024-07-11 02:40:49.503463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.269 [2024-07-11 02:40:49.503487] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8580, cid 3, qid 0 00:34:59.269 [2024-07-11 02:40:49.507532] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.269 [2024-07-11 02:40:49.507550] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.269 [2024-07-11 02:40:49.507558] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.269 [2024-07-11 02:40:49.507566] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8580) on tqpair=0x23519f0 00:34:59.269 [2024-07-11 02:40:49.507586] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.269 [2024-07-11 02:40:49.507597] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.269 [2024-07-11 02:40:49.507604] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23519f0) 00:34:59.269 [2024-07-11 02:40:49.507617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.269 [2024-07-11 02:40:49.507641] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23a8580, cid 3, qid 0 00:34:59.269 [2024-07-11 02:40:49.507738] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.269 [2024-07-11 02:40:49.507752] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.269 [2024-07-11 02:40:49.507759] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.269 [2024-07-11 02:40:49.507767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23a8580) on tqpair=0x23519f0 00:34:59.269 [2024-07-11 
02:40:49.507787] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:34:59.269 00:34:59.269 02:40:49 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:34:59.269 [2024-07-11 02:40:49.543468] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:34:59.270 [2024-07-11 02:40:49.543557] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1930966 ] 00:34:59.270 EAL: No free 2048 kB hugepages reported on node 1 00:34:59.270 [2024-07-11 02:40:49.585974] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:34:59.270 [2024-07-11 02:40:49.586037] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:34:59.270 [2024-07-11 02:40:49.586049] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:34:59.270 [2024-07-11 02:40:49.586064] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:34:59.270 [2024-07-11 02:40:49.586075] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:34:59.270 [2024-07-11 02:40:49.586223] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:34:59.270 [2024-07-11 02:40:49.586263] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x21e69f0 0 00:34:59.270 [2024-07-11 02:40:49.599545] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:34:59.270 
[2024-07-11 02:40:49.599566] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:34:59.270 [2024-07-11 02:40:49.599576] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:34:59.270 [2024-07-11 02:40:49.599583] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:34:59.270 [2024-07-11 02:40:49.599629] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.270 [2024-07-11 02:40:49.599643] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.270 [2024-07-11 02:40:49.599651] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21e69f0) 00:34:59.270 [2024-07-11 02:40:49.599667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:34:59.270 [2024-07-11 02:40:49.599694] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d100, cid 0, qid 0 00:34:59.270 [2024-07-11 02:40:49.606540] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.270 [2024-07-11 02:40:49.606559] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.270 [2024-07-11 02:40:49.606568] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.270 [2024-07-11 02:40:49.606576] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d100) on tqpair=0x21e69f0 00:34:59.270 [2024-07-11 02:40:49.606594] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:34:59.270 [2024-07-11 02:40:49.606606] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:34:59.270 [2024-07-11 02:40:49.606616] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:34:59.270 [2024-07-11 02:40:49.606638] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.270 
[2024-07-11 02:40:49.606647] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.270 [2024-07-11 02:40:49.606655] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21e69f0) 00:34:59.270 [2024-07-11 02:40:49.606672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.270 [2024-07-11 02:40:49.606698] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d100, cid 0, qid 0 00:34:59.270 [2024-07-11 02:40:49.606802] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.270 [2024-07-11 02:40:49.606817] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.270 [2024-07-11 02:40:49.606825] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.270 [2024-07-11 02:40:49.606833] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d100) on tqpair=0x21e69f0 00:34:59.270 [2024-07-11 02:40:49.606842] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:34:59.270 [2024-07-11 02:40:49.606859] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:34:59.270 [2024-07-11 02:40:49.606873] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.270 [2024-07-11 02:40:49.606882] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.270 [2024-07-11 02:40:49.606889] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21e69f0) 00:34:59.270 [2024-07-11 02:40:49.606901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.270 [2024-07-11 02:40:49.606924] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d100, cid 0, qid 0 00:34:59.270 
[2024-07-11 02:40:49.607016] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.270 [2024-07-11 02:40:49.607031] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.270 [2024-07-11 02:40:49.607039] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.270 [2024-07-11 02:40:49.607047] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d100) on tqpair=0x21e69f0 00:34:59.270 [2024-07-11 02:40:49.607060] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:34:59.270 [2024-07-11 02:40:49.607076] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:34:59.270 [2024-07-11 02:40:49.607090] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.270 [2024-07-11 02:40:49.607098] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.270 [2024-07-11 02:40:49.607108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21e69f0) 00:34:59.270 [2024-07-11 02:40:49.607120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.270 [2024-07-11 02:40:49.607142] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d100, cid 0, qid 0 00:34:59.270 [2024-07-11 02:40:49.607234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.270 [2024-07-11 02:40:49.607249] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.270 [2024-07-11 02:40:49.607257] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.270 [2024-07-11 02:40:49.607265] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d100) on tqpair=0x21e69f0 00:34:59.270 [2024-07-11 02:40:49.607277] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:34:59.270 [2024-07-11 02:40:49.607296] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.270 [2024-07-11 02:40:49.607306] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.270 [2024-07-11 02:40:49.607313] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21e69f0) 00:34:59.270 [2024-07-11 02:40:49.607328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.270 [2024-07-11 02:40:49.607350] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d100, cid 0, qid 0 00:34:59.270 [2024-07-11 02:40:49.607446] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.270 [2024-07-11 02:40:49.607465] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.270 [2024-07-11 02:40:49.607474] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.270 [2024-07-11 02:40:49.607482] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d100) on tqpair=0x21e69f0 00:34:59.270 [2024-07-11 02:40:49.607491] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:34:59.270 [2024-07-11 02:40:49.607500] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:34:59.270 [2024-07-11 02:40:49.607526] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:34:59.270 [2024-07-11 02:40:49.607639] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:34:59.270 [2024-07-11 02:40:49.607647] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:34:59.270 [2024-07-11 02:40:49.607661] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.270 [2024-07-11 02:40:49.607670] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.270 [2024-07-11 02:40:49.607677] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21e69f0) 00:34:59.270 [2024-07-11 02:40:49.607689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.270 [2024-07-11 02:40:49.607712] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d100, cid 0, qid 0 00:34:59.270 [2024-07-11 02:40:49.607805] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.270 [2024-07-11 02:40:49.607820] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.270 [2024-07-11 02:40:49.607828] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.270 [2024-07-11 02:40:49.607836] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d100) on tqpair=0x21e69f0 00:34:59.270 [2024-07-11 02:40:49.607848] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:34:59.270 [2024-07-11 02:40:49.607866] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.270 [2024-07-11 02:40:49.607876] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.270 [2024-07-11 02:40:49.607883] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21e69f0) 00:34:59.271 [2024-07-11 02:40:49.607898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.271 [2024-07-11 02:40:49.607920] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d100, cid 0, qid 0 00:34:59.271 [2024-07-11 02:40:49.608017] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.271 [2024-07-11 02:40:49.608032] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.271 [2024-07-11 02:40:49.608040] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.271 [2024-07-11 02:40:49.608048] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d100) on tqpair=0x21e69f0 00:34:59.271 [2024-07-11 02:40:49.608056] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:34:59.271 [2024-07-11 02:40:49.608066] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:34:59.271 [2024-07-11 02:40:49.608084] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:34:59.271 [2024-07-11 02:40:49.608100] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:34:59.271 [2024-07-11 02:40:49.608115] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.271 [2024-07-11 02:40:49.608127] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21e69f0) 00:34:59.271 [2024-07-11 02:40:49.608139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.271 [2024-07-11 02:40:49.608161] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d100, cid 0, qid 0 00:34:59.271 [2024-07-11 02:40:49.608323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:59.271 [2024-07-11 02:40:49.608339] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:59.271 [2024-07-11 02:40:49.608347] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:59.271 [2024-07-11 02:40:49.608370] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21e69f0): datao=0, datal=4096, cccid=0 00:34:59.271 [2024-07-11 02:40:49.608380] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x223d100) on tqpair(0x21e69f0): expected_datao=0, payload_size=4096 00:34:59.271 [2024-07-11 02:40:49.608389] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.271 [2024-07-11 02:40:49.608401] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:59.271 [2024-07-11 02:40:49.608410] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:59.271 [2024-07-11 02:40:49.608424] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.271 [2024-07-11 02:40:49.608435] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.271 [2024-07-11 02:40:49.608442] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.271 [2024-07-11 02:40:49.608450] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d100) on tqpair=0x21e69f0 00:34:59.271 [2024-07-11 02:40:49.608462] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:34:59.271 [2024-07-11 02:40:49.608476] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:34:59.271 [2024-07-11 02:40:49.608485] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:34:59.271 [2024-07-11 02:40:49.608493] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:34:59.271 [2024-07-11 02:40:49.608502] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
fuses compare and write: 1 00:34:59.271 [2024-07-11 02:40:49.608520] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:34:59.271 [2024-07-11 02:40:49.608540] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:34:59.271 [2024-07-11 02:40:49.608554] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.271 [2024-07-11 02:40:49.608562] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.271 [2024-07-11 02:40:49.608570] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21e69f0) 00:34:59.271 [2024-07-11 02:40:49.608582] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:59.271 [2024-07-11 02:40:49.608605] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d100, cid 0, qid 0 00:34:59.271 [2024-07-11 02:40:49.608705] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.271 [2024-07-11 02:40:49.608721] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.271 [2024-07-11 02:40:49.608728] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.271 [2024-07-11 02:40:49.608736] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d100) on tqpair=0x21e69f0 00:34:59.271 [2024-07-11 02:40:49.608752] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.271 [2024-07-11 02:40:49.608761] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.271 [2024-07-11 02:40:49.608768] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21e69f0) 00:34:59.271 [2024-07-11 02:40:49.608779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:59.271 [2024-07-11 02:40:49.608795] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.271 [2024-07-11 02:40:49.608803] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.271 [2024-07-11 02:40:49.608811] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x21e69f0) 00:34:59.271 [2024-07-11 02:40:49.608821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:59.271 [2024-07-11 02:40:49.608832] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.271 [2024-07-11 02:40:49.608840] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.271 [2024-07-11 02:40:49.608847] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x21e69f0) 00:34:59.271 [2024-07-11 02:40:49.608857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:59.271 [2024-07-11 02:40:49.608868] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.271 [2024-07-11 02:40:49.608875] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.271 [2024-07-11 02:40:49.608883] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21e69f0) 00:34:59.271 [2024-07-11 02:40:49.608892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:59.271 [2024-07-11 02:40:49.608902] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:34:59.271 [2024-07-11 02:40:49.608924] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:34:59.271 
[2024-07-11 02:40:49.608939] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.272 [2024-07-11 02:40:49.608947] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21e69f0) 00:34:59.272 [2024-07-11 02:40:49.608958] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.272 [2024-07-11 02:40:49.608983] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d100, cid 0, qid 0 00:34:59.272 [2024-07-11 02:40:49.608996] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d280, cid 1, qid 0 00:34:59.272 [2024-07-11 02:40:49.609005] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d400, cid 2, qid 0 00:34:59.272 [2024-07-11 02:40:49.609014] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d580, cid 3, qid 0 00:34:59.272 [2024-07-11 02:40:49.609023] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d700, cid 4, qid 0 00:34:59.272 [2024-07-11 02:40:49.609148] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.272 [2024-07-11 02:40:49.609163] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.272 [2024-07-11 02:40:49.609171] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.272 [2024-07-11 02:40:49.609181] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d700) on tqpair=0x21e69f0 00:34:59.272 [2024-07-11 02:40:49.609191] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:34:59.272 [2024-07-11 02:40:49.609201] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:34:59.272 [2024-07-11 02:40:49.609216] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:34:59.272 [2024-07-11 02:40:49.609230] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:34:59.272 [2024-07-11 02:40:49.609243] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.272 [2024-07-11 02:40:49.609251] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.272 [2024-07-11 02:40:49.609262] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21e69f0) 00:34:59.272 [2024-07-11 02:40:49.609275] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:59.272 [2024-07-11 02:40:49.609298] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d700, cid 4, qid 0 00:34:59.272 [2024-07-11 02:40:49.609394] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.272 [2024-07-11 02:40:49.609409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.272 [2024-07-11 02:40:49.609417] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.272 [2024-07-11 02:40:49.609425] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d700) on tqpair=0x21e69f0 00:34:59.272 [2024-07-11 02:40:49.609499] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:34:59.272 [2024-07-11 02:40:49.609529] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:34:59.272 [2024-07-11 02:40:49.609547] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.272 [2024-07-11 02:40:49.609555] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x21e69f0) 00:34:59.272 [2024-07-11 02:40:49.609567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.272 [2024-07-11 02:40:49.609590] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d700, cid 4, qid 0 00:34:59.272 [2024-07-11 02:40:49.609704] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:59.272 [2024-07-11 02:40:49.609720] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:59.272 [2024-07-11 02:40:49.609727] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:59.272 [2024-07-11 02:40:49.609748] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21e69f0): datao=0, datal=4096, cccid=4 00:34:59.272 [2024-07-11 02:40:49.609759] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x223d700) on tqpair(0x21e69f0): expected_datao=0, payload_size=4096 00:34:59.272 [2024-07-11 02:40:49.609768] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.272 [2024-07-11 02:40:49.609780] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:59.272 [2024-07-11 02:40:49.609788] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:59.272 [2024-07-11 02:40:49.609802] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.272 [2024-07-11 02:40:49.609813] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.272 [2024-07-11 02:40:49.609821] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.272 [2024-07-11 02:40:49.609828] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d700) on tqpair=0x21e69f0 00:34:59.272 [2024-07-11 02:40:49.609846] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:34:59.272 [2024-07-11 02:40:49.609869] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:34:59.272 [2024-07-11 02:40:49.609890] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:34:59.272 [2024-07-11 02:40:49.609905] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.272 [2024-07-11 02:40:49.609914] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21e69f0) 00:34:59.272 [2024-07-11 02:40:49.609925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.272 [2024-07-11 02:40:49.609948] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d700, cid 4, qid 0 00:34:59.272 [2024-07-11 02:40:49.610070] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:59.272 [2024-07-11 02:40:49.610089] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:59.272 [2024-07-11 02:40:49.610098] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:59.272 [2024-07-11 02:40:49.610119] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21e69f0): datao=0, datal=4096, cccid=4 00:34:59.272 [2024-07-11 02:40:49.610130] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x223d700) on tqpair(0x21e69f0): expected_datao=0, payload_size=4096 00:34:59.272 [2024-07-11 02:40:49.610138] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.272 [2024-07-11 02:40:49.610150] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:59.272 [2024-07-11 02:40:49.610159] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:59.272 [2024-07-11 02:40:49.610172] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.272 [2024-07-11 02:40:49.610183] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.272 [2024-07-11 02:40:49.610190] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.272 [2024-07-11 02:40:49.610198] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d700) on tqpair=0x21e69f0 00:34:59.272 [2024-07-11 02:40:49.610219] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:34:59.272 [2024-07-11 02:40:49.610242] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:34:59.272 [2024-07-11 02:40:49.610258] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.272 [2024-07-11 02:40:49.610267] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21e69f0) 00:34:59.272 [2024-07-11 02:40:49.610279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.272 [2024-07-11 02:40:49.610301] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d700, cid 4, qid 0 00:34:59.272 [2024-07-11 02:40:49.610412] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:59.272 [2024-07-11 02:40:49.610428] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:59.272 [2024-07-11 02:40:49.610436] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:59.272 [2024-07-11 02:40:49.610443] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21e69f0): datao=0, datal=4096, cccid=4 00:34:59.272 [2024-07-11 02:40:49.610467] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x223d700) on tqpair(0x21e69f0): expected_datao=0, payload_size=4096 00:34:59.272 [2024-07-11 02:40:49.610476] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.272 [2024-07-11 02:40:49.610488] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:59.272 [2024-07-11 02:40:49.610496] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:59.272 [2024-07-11 02:40:49.614517] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.272 [2024-07-11 02:40:49.614535] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.272 [2024-07-11 02:40:49.614543] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.272 [2024-07-11 02:40:49.614551] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d700) on tqpair=0x21e69f0 00:34:59.272 [2024-07-11 02:40:49.614566] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:34:59.272 [2024-07-11 02:40:49.614586] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:34:59.272 [2024-07-11 02:40:49.614603] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:34:59.272 [2024-07-11 02:40:49.614616] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:34:59.273 [2024-07-11 02:40:49.614626] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:34:59.273 [2024-07-11 02:40:49.614642] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:34:59.273 [2024-07-11 02:40:49.614653] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:34:59.273 
[2024-07-11 02:40:49.614662] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:34:59.273 [2024-07-11 02:40:49.614671] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:34:59.273 [2024-07-11 02:40:49.614692] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.614701] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21e69f0) 00:34:59.273 [2024-07-11 02:40:49.614714] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.273 [2024-07-11 02:40:49.614726] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.614734] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.614741] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21e69f0) 00:34:59.273 [2024-07-11 02:40:49.614752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:34:59.273 [2024-07-11 02:40:49.614779] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d700, cid 4, qid 0 00:34:59.273 [2024-07-11 02:40:49.614792] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d880, cid 5, qid 0 00:34:59.273 [2024-07-11 02:40:49.614913] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.273 [2024-07-11 02:40:49.614929] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.273 [2024-07-11 02:40:49.614937] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.614945] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d700) on tqpair=0x21e69f0 
00:34:59.273 [2024-07-11 02:40:49.614956] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.273 [2024-07-11 02:40:49.614967] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.273 [2024-07-11 02:40:49.614974] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.614982] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d880) on tqpair=0x21e69f0 00:34:59.273 [2024-07-11 02:40:49.615001] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.615012] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21e69f0) 00:34:59.273 [2024-07-11 02:40:49.615024] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.273 [2024-07-11 02:40:49.615046] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d880, cid 5, qid 0 00:34:59.273 [2024-07-11 02:40:49.615147] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.273 [2024-07-11 02:40:49.615162] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.273 [2024-07-11 02:40:49.615170] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.615178] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d880) on tqpair=0x21e69f0 00:34:59.273 [2024-07-11 02:40:49.615198] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.615208] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21e69f0) 00:34:59.273 [2024-07-11 02:40:49.615220] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.273 [2024-07-11 02:40:49.615244] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d880, cid 5, qid 0 00:34:59.273 [2024-07-11 02:40:49.615344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.273 [2024-07-11 02:40:49.615359] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.273 [2024-07-11 02:40:49.615367] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.615375] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d880) on tqpair=0x21e69f0 00:34:59.273 [2024-07-11 02:40:49.615394] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.615404] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21e69f0) 00:34:59.273 [2024-07-11 02:40:49.615416] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.273 [2024-07-11 02:40:49.615438] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d880, cid 5, qid 0 00:34:59.273 [2024-07-11 02:40:49.615540] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.273 [2024-07-11 02:40:49.615556] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.273 [2024-07-11 02:40:49.615564] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.615572] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d880) on tqpair=0x21e69f0 00:34:59.273 [2024-07-11 02:40:49.615599] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.615611] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21e69f0) 00:34:59.273 [2024-07-11 02:40:49.615623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.273 [2024-07-11 02:40:49.615636] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.615644] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21e69f0) 00:34:59.273 [2024-07-11 02:40:49.615655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.273 [2024-07-11 02:40:49.615668] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.615676] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x21e69f0) 00:34:59.273 [2024-07-11 02:40:49.615687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.273 [2024-07-11 02:40:49.615700] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.615708] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x21e69f0) 00:34:59.273 [2024-07-11 02:40:49.615719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.273 [2024-07-11 02:40:49.615743] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d880, cid 5, qid 0 00:34:59.273 [2024-07-11 02:40:49.615755] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d700, cid 4, qid 0 00:34:59.273 [2024-07-11 02:40:49.615764] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223da00, cid 6, qid 0 00:34:59.273 [2024-07-11 02:40:49.615773] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223db80, cid 7, qid 0 00:34:59.273 [2024-07-11 02:40:49.615966] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:59.273 [2024-07-11 02:40:49.616023] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:59.273 [2024-07-11 02:40:49.616033] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.616041] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21e69f0): datao=0, datal=8192, cccid=5 00:34:59.273 [2024-07-11 02:40:49.616050] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x223d880) on tqpair(0x21e69f0): expected_datao=0, payload_size=8192 00:34:59.273 [2024-07-11 02:40:49.616062] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.616099] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.616110] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.616124] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:59.273 [2024-07-11 02:40:49.616135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:59.273 [2024-07-11 02:40:49.616143] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.616150] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21e69f0): datao=0, datal=512, cccid=4 00:34:59.273 [2024-07-11 02:40:49.616159] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x223d700) on tqpair(0x21e69f0): expected_datao=0, payload_size=512 00:34:59.273 [2024-07-11 02:40:49.616167] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.616178] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.616186] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.616196] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 
00:34:59.273 [2024-07-11 02:40:49.616213] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:59.273 [2024-07-11 02:40:49.616220] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.616227] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21e69f0): datao=0, datal=512, cccid=6 00:34:59.273 [2024-07-11 02:40:49.616236] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x223da00) on tqpair(0x21e69f0): expected_datao=0, payload_size=512 00:34:59.273 [2024-07-11 02:40:49.616244] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.616255] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.616262] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.616272] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:59.273 [2024-07-11 02:40:49.616282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:59.273 [2024-07-11 02:40:49.616290] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:59.273 [2024-07-11 02:40:49.616297] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21e69f0): datao=0, datal=4096, cccid=7 00:34:59.273 [2024-07-11 02:40:49.616305] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x223db80) on tqpair(0x21e69f0): expected_datao=0, payload_size=4096 00:34:59.273 [2024-07-11 02:40:49.616314] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.274 [2024-07-11 02:40:49.616325] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:59.274 [2024-07-11 02:40:49.616333] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:59.274 [2024-07-11 02:40:49.616342] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.274 [2024-07-11 02:40:49.616353] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.274 [2024-07-11 02:40:49.616360] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.274 [2024-07-11 02:40:49.616368] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d880) on tqpair=0x21e69f0 00:34:59.274 [2024-07-11 02:40:49.616388] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.274 [2024-07-11 02:40:49.616400] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.274 [2024-07-11 02:40:49.616408] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.274 [2024-07-11 02:40:49.616415] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d700) on tqpair=0x21e69f0 00:34:59.274 [2024-07-11 02:40:49.616432] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.274 [2024-07-11 02:40:49.616444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.274 [2024-07-11 02:40:49.616451] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.274 [2024-07-11 02:40:49.616459] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223da00) on tqpair=0x21e69f0 00:34:59.274 [2024-07-11 02:40:49.616470] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.274 [2024-07-11 02:40:49.616485] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.274 [2024-07-11 02:40:49.616493] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.274 [2024-07-11 02:40:49.616500] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223db80) on tqpair=0x21e69f0 00:34:59.274 ===================================================== 00:34:59.274 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:59.274 ===================================================== 00:34:59.274 Controller Capabilities/Features 00:34:59.274 
================================ 00:34:59.274 Vendor ID: 8086 00:34:59.274 Subsystem Vendor ID: 8086 00:34:59.274 Serial Number: SPDK00000000000001 00:34:59.274 Model Number: SPDK bdev Controller 00:34:59.274 Firmware Version: 24.09 00:34:59.274 Recommended Arb Burst: 6 00:34:59.274 IEEE OUI Identifier: e4 d2 5c 00:34:59.274 Multi-path I/O 00:34:59.274 May have multiple subsystem ports: Yes 00:34:59.274 May have multiple controllers: Yes 00:34:59.274 Associated with SR-IOV VF: No 00:34:59.274 Max Data Transfer Size: 131072 00:34:59.274 Max Number of Namespaces: 32 00:34:59.274 Max Number of I/O Queues: 127 00:34:59.274 NVMe Specification Version (VS): 1.3 00:34:59.274 NVMe Specification Version (Identify): 1.3 00:34:59.274 Maximum Queue Entries: 128 00:34:59.274 Contiguous Queues Required: Yes 00:34:59.274 Arbitration Mechanisms Supported 00:34:59.274 Weighted Round Robin: Not Supported 00:34:59.274 Vendor Specific: Not Supported 00:34:59.274 Reset Timeout: 15000 ms 00:34:59.274 Doorbell Stride: 4 bytes 00:34:59.274 NVM Subsystem Reset: Not Supported 00:34:59.274 Command Sets Supported 00:34:59.274 NVM Command Set: Supported 00:34:59.274 Boot Partition: Not Supported 00:34:59.274 Memory Page Size Minimum: 4096 bytes 00:34:59.274 Memory Page Size Maximum: 4096 bytes 00:34:59.274 Persistent Memory Region: Not Supported 00:34:59.274 Optional Asynchronous Events Supported 00:34:59.274 Namespace Attribute Notices: Supported 00:34:59.274 Firmware Activation Notices: Not Supported 00:34:59.274 ANA Change Notices: Not Supported 00:34:59.274 PLE Aggregate Log Change Notices: Not Supported 00:34:59.274 LBA Status Info Alert Notices: Not Supported 00:34:59.274 EGE Aggregate Log Change Notices: Not Supported 00:34:59.274 Normal NVM Subsystem Shutdown event: Not Supported 00:34:59.274 Zone Descriptor Change Notices: Not Supported 00:34:59.274 Discovery Log Change Notices: Not Supported 00:34:59.274 Controller Attributes 00:34:59.274 128-bit Host Identifier: Supported 
00:34:59.274 Non-Operational Permissive Mode: Not Supported 00:34:59.274 NVM Sets: Not Supported 00:34:59.274 Read Recovery Levels: Not Supported 00:34:59.274 Endurance Groups: Not Supported 00:34:59.274 Predictable Latency Mode: Not Supported 00:34:59.274 Traffic Based Keep ALive: Not Supported 00:34:59.274 Namespace Granularity: Not Supported 00:34:59.274 SQ Associations: Not Supported 00:34:59.274 UUID List: Not Supported 00:34:59.274 Multi-Domain Subsystem: Not Supported 00:34:59.274 Fixed Capacity Management: Not Supported 00:34:59.274 Variable Capacity Management: Not Supported 00:34:59.274 Delete Endurance Group: Not Supported 00:34:59.274 Delete NVM Set: Not Supported 00:34:59.274 Extended LBA Formats Supported: Not Supported 00:34:59.274 Flexible Data Placement Supported: Not Supported 00:34:59.274 00:34:59.274 Controller Memory Buffer Support 00:34:59.274 ================================ 00:34:59.274 Supported: No 00:34:59.274 00:34:59.274 Persistent Memory Region Support 00:34:59.274 ================================ 00:34:59.274 Supported: No 00:34:59.274 00:34:59.274 Admin Command Set Attributes 00:34:59.274 ============================ 00:34:59.274 Security Send/Receive: Not Supported 00:34:59.274 Format NVM: Not Supported 00:34:59.274 Firmware Activate/Download: Not Supported 00:34:59.274 Namespace Management: Not Supported 00:34:59.274 Device Self-Test: Not Supported 00:34:59.274 Directives: Not Supported 00:34:59.274 NVMe-MI: Not Supported 00:34:59.274 Virtualization Management: Not Supported 00:34:59.274 Doorbell Buffer Config: Not Supported 00:34:59.274 Get LBA Status Capability: Not Supported 00:34:59.274 Command & Feature Lockdown Capability: Not Supported 00:34:59.274 Abort Command Limit: 4 00:34:59.274 Async Event Request Limit: 4 00:34:59.274 Number of Firmware Slots: N/A 00:34:59.274 Firmware Slot 1 Read-Only: N/A 00:34:59.274 Firmware Activation Without Reset: N/A 00:34:59.274 Multiple Update Detection Support: N/A 00:34:59.274 Firmware 
Update Granularity: No Information Provided 00:34:59.274 Per-Namespace SMART Log: No 00:34:59.274 Asymmetric Namespace Access Log Page: Not Supported 00:34:59.274 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:34:59.274 Command Effects Log Page: Supported 00:34:59.274 Get Log Page Extended Data: Supported 00:34:59.274 Telemetry Log Pages: Not Supported 00:34:59.274 Persistent Event Log Pages: Not Supported 00:34:59.274 Supported Log Pages Log Page: May Support 00:34:59.274 Commands Supported & Effects Log Page: Not Supported 00:34:59.274 Feature Identifiers & Effects Log Page:May Support 00:34:59.274 NVMe-MI Commands & Effects Log Page: May Support 00:34:59.274 Data Area 4 for Telemetry Log: Not Supported 00:34:59.274 Error Log Page Entries Supported: 128 00:34:59.274 Keep Alive: Supported 00:34:59.274 Keep Alive Granularity: 10000 ms 00:34:59.274 00:34:59.274 NVM Command Set Attributes 00:34:59.274 ========================== 00:34:59.274 Submission Queue Entry Size 00:34:59.274 Max: 64 00:34:59.274 Min: 64 00:34:59.274 Completion Queue Entry Size 00:34:59.274 Max: 16 00:34:59.274 Min: 16 00:34:59.274 Number of Namespaces: 32 00:34:59.274 Compare Command: Supported 00:34:59.274 Write Uncorrectable Command: Not Supported 00:34:59.274 Dataset Management Command: Supported 00:34:59.275 Write Zeroes Command: Supported 00:34:59.275 Set Features Save Field: Not Supported 00:34:59.275 Reservations: Supported 00:34:59.275 Timestamp: Not Supported 00:34:59.275 Copy: Supported 00:34:59.275 Volatile Write Cache: Present 00:34:59.275 Atomic Write Unit (Normal): 1 00:34:59.275 Atomic Write Unit (PFail): 1 00:34:59.275 Atomic Compare & Write Unit: 1 00:34:59.275 Fused Compare & Write: Supported 00:34:59.275 Scatter-Gather List 00:34:59.275 SGL Command Set: Supported 00:34:59.275 SGL Keyed: Supported 00:34:59.275 SGL Bit Bucket Descriptor: Not Supported 00:34:59.275 SGL Metadata Pointer: Not Supported 00:34:59.275 Oversized SGL: Not Supported 00:34:59.275 SGL Metadata Address: Not 
Supported 00:34:59.275 SGL Offset: Supported 00:34:59.275 Transport SGL Data Block: Not Supported 00:34:59.275 Replay Protected Memory Block: Not Supported 00:34:59.275 00:34:59.275 Firmware Slot Information 00:34:59.275 ========================= 00:34:59.275 Active slot: 1 00:34:59.275 Slot 1 Firmware Revision: 24.09 00:34:59.275 00:34:59.275 00:34:59.275 Commands Supported and Effects 00:34:59.275 ============================== 00:34:59.275 Admin Commands 00:34:59.275 -------------- 00:34:59.275 Get Log Page (02h): Supported 00:34:59.275 Identify (06h): Supported 00:34:59.275 Abort (08h): Supported 00:34:59.275 Set Features (09h): Supported 00:34:59.275 Get Features (0Ah): Supported 00:34:59.275 Asynchronous Event Request (0Ch): Supported 00:34:59.275 Keep Alive (18h): Supported 00:34:59.275 I/O Commands 00:34:59.275 ------------ 00:34:59.275 Flush (00h): Supported LBA-Change 00:34:59.275 Write (01h): Supported LBA-Change 00:34:59.275 Read (02h): Supported 00:34:59.275 Compare (05h): Supported 00:34:59.275 Write Zeroes (08h): Supported LBA-Change 00:34:59.275 Dataset Management (09h): Supported LBA-Change 00:34:59.275 Copy (19h): Supported LBA-Change 00:34:59.275 00:34:59.275 Error Log 00:34:59.275 ========= 00:34:59.275 00:34:59.275 Arbitration 00:34:59.275 =========== 00:34:59.275 Arbitration Burst: 1 00:34:59.275 00:34:59.275 Power Management 00:34:59.275 ================ 00:34:59.275 Number of Power States: 1 00:34:59.275 Current Power State: Power State #0 00:34:59.275 Power State #0: 00:34:59.275 Max Power: 0.00 W 00:34:59.275 Non-Operational State: Operational 00:34:59.275 Entry Latency: Not Reported 00:34:59.275 Exit Latency: Not Reported 00:34:59.275 Relative Read Throughput: 0 00:34:59.275 Relative Read Latency: 0 00:34:59.275 Relative Write Throughput: 0 00:34:59.275 Relative Write Latency: 0 00:34:59.275 Idle Power: Not Reported 00:34:59.275 Active Power: Not Reported 00:34:59.275 Non-Operational Permissive Mode: Not Supported 00:34:59.275 
00:34:59.275 Health Information 00:34:59.275 ================== 00:34:59.275 Critical Warnings: 00:34:59.275 Available Spare Space: OK 00:34:59.275 Temperature: OK 00:34:59.275 Device Reliability: OK 00:34:59.275 Read Only: No 00:34:59.275 Volatile Memory Backup: OK 00:34:59.275 Current Temperature: 0 Kelvin (-273 Celsius) 00:34:59.275 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:34:59.275 Available Spare: 0% 00:34:59.275 Available Spare Threshold: 0% 00:34:59.275 Life Percentage Used:[2024-07-11 02:40:49.616652] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.275 [2024-07-11 02:40:49.616665] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x21e69f0) 00:34:59.275 [2024-07-11 02:40:49.616678] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.275 [2024-07-11 02:40:49.616702] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223db80, cid 7, qid 0 00:34:59.275 [2024-07-11 02:40:49.616811] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.275 [2024-07-11 02:40:49.616827] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.275 [2024-07-11 02:40:49.616835] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.275 [2024-07-11 02:40:49.616843] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223db80) on tqpair=0x21e69f0 00:34:59.275 [2024-07-11 02:40:49.616892] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:34:59.275 [2024-07-11 02:40:49.616915] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d100) on tqpair=0x21e69f0 00:34:59.275 [2024-07-11 02:40:49.616927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:59.275 [2024-07-11 
02:40:49.616937] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d280) on tqpair=0x21e69f0 00:34:59.275 [2024-07-11 02:40:49.616945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:59.275 [2024-07-11 02:40:49.616955] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d400) on tqpair=0x21e69f0 00:34:59.275 [2024-07-11 02:40:49.616963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:59.275 [2024-07-11 02:40:49.616973] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d580) on tqpair=0x21e69f0 00:34:59.275 [2024-07-11 02:40:49.616982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:59.275 [2024-07-11 02:40:49.616996] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.275 [2024-07-11 02:40:49.617004] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.275 [2024-07-11 02:40:49.617012] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21e69f0) 00:34:59.275 [2024-07-11 02:40:49.617023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.275 [2024-07-11 02:40:49.617047] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d580, cid 3, qid 0 00:34:59.275 [2024-07-11 02:40:49.617149] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.275 [2024-07-11 02:40:49.617165] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.275 [2024-07-11 02:40:49.617172] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.275 [2024-07-11 02:40:49.617180] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d580) on 
tqpair=0x21e69f0 00:34:59.275 [2024-07-11 02:40:49.617193] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.275 [2024-07-11 02:40:49.617201] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.275 [2024-07-11 02:40:49.617209] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21e69f0) 00:34:59.275 [2024-07-11 02:40:49.617221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.275 [2024-07-11 02:40:49.617251] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d580, cid 3, qid 0 00:34:59.275 [2024-07-11 02:40:49.617356] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.275 [2024-07-11 02:40:49.617371] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.275 [2024-07-11 02:40:49.617379] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.275 [2024-07-11 02:40:49.617387] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d580) on tqpair=0x21e69f0 00:34:59.275 [2024-07-11 02:40:49.617396] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:34:59.275 [2024-07-11 02:40:49.617405] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:34:59.275 [2024-07-11 02:40:49.617425] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.275 [2024-07-11 02:40:49.617435] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.275 [2024-07-11 02:40:49.617442] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21e69f0) 00:34:59.276 [2024-07-11 02:40:49.617454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.276 [2024-07-11 02:40:49.617476] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d580, cid 3, qid 0 00:34:59.276 [2024-07-11 02:40:49.617581] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.276 [2024-07-11 02:40:49.617597] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.276 [2024-07-11 02:40:49.617605] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.276 [2024-07-11 02:40:49.617613] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d580) on tqpair=0x21e69f0 00:34:59.276 [2024-07-11 02:40:49.617633] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.276 [2024-07-11 02:40:49.617644] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.276 [2024-07-11 02:40:49.617651] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21e69f0) 00:34:59.276 [2024-07-11 02:40:49.617663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.276 [2024-07-11 02:40:49.617685] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d580, cid 3, qid 0 00:34:59.276 [2024-07-11 02:40:49.617789] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.276 [2024-07-11 02:40:49.617804] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.276 [2024-07-11 02:40:49.617812] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.276 [2024-07-11 02:40:49.617819] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d580) on tqpair=0x21e69f0 00:34:59.276 [2024-07-11 02:40:49.617839] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.276 [2024-07-11 02:40:49.617849] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.276 [2024-07-11 02:40:49.617857] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x21e69f0) 00:34:59.276 [2024-07-11 02:40:49.617868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.276 [2024-07-11 02:40:49.617890] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d580, cid 3, qid 0 00:34:59.276 [2024-07-11 02:40:49.617990] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.276 [2024-07-11 02:40:49.618005] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.276 [2024-07-11 02:40:49.618013] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.276 [2024-07-11 02:40:49.618021] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d580) on tqpair=0x21e69f0 00:34:59.276 [2024-07-11 02:40:49.618041] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.276 [2024-07-11 02:40:49.618051] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.276 [2024-07-11 02:40:49.618058] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21e69f0) 00:34:59.276 [2024-07-11 02:40:49.618070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.276 [2024-07-11 02:40:49.618096] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d580, cid 3, qid 0 00:34:59.276 [2024-07-11 02:40:49.618199] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.276 [2024-07-11 02:40:49.618214] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.276 [2024-07-11 02:40:49.618222] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.276 [2024-07-11 02:40:49.618230] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d580) on tqpair=0x21e69f0 00:34:59.276 [2024-07-11 02:40:49.618250] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:34:59.276 [2024-07-11 02:40:49.618260] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.276 [2024-07-11 02:40:49.618267] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21e69f0) 00:34:59.276 [2024-07-11 02:40:49.618279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.276 [2024-07-11 02:40:49.618301] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d580, cid 3, qid 0 00:34:59.276 [2024-07-11 02:40:49.618390] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.276 [2024-07-11 02:40:49.618405] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.276 [2024-07-11 02:40:49.618413] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.276 [2024-07-11 02:40:49.618421] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d580) on tqpair=0x21e69f0 00:34:59.276 [2024-07-11 02:40:49.618441] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.276 [2024-07-11 02:40:49.618451] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.276 [2024-07-11 02:40:49.618458] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21e69f0) 00:34:59.276 [2024-07-11 02:40:49.618470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.276 [2024-07-11 02:40:49.618492] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d580, cid 3, qid 0 00:34:59.276 [2024-07-11 02:40:49.622543] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.276 [2024-07-11 02:40:49.622561] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.276 [2024-07-11 02:40:49.622569] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:34:59.276 [2024-07-11 02:40:49.622577] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d580) on tqpair=0x21e69f0 00:34:59.276 [2024-07-11 02:40:49.622598] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:59.276 [2024-07-11 02:40:49.622609] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:59.276 [2024-07-11 02:40:49.622617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21e69f0) 00:34:59.276 [2024-07-11 02:40:49.622629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.276 [2024-07-11 02:40:49.622652] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x223d580, cid 3, qid 0 00:34:59.276 [2024-07-11 02:40:49.622753] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:59.276 [2024-07-11 02:40:49.622769] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:59.276 [2024-07-11 02:40:49.622776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:59.276 [2024-07-11 02:40:49.622785] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x223d580) on tqpair=0x21e69f0 00:34:59.276 [2024-07-11 02:40:49.622801] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:34:59.276 0% 00:34:59.276 Data Units Read: 0 00:34:59.276 Data Units Written: 0 00:34:59.276 Host Read Commands: 0 00:34:59.276 Host Write Commands: 0 00:34:59.276 Controller Busy Time: 0 minutes 00:34:59.276 Power Cycles: 0 00:34:59.276 Power On Hours: 0 hours 00:34:59.276 Unsafe Shutdowns: 0 00:34:59.276 Unrecoverable Media Errors: 0 00:34:59.276 Lifetime Error Log Entries: 0 00:34:59.276 Warning Temperature Time: 0 minutes 00:34:59.276 Critical Temperature Time: 0 minutes 00:34:59.276 00:34:59.276 Number of Queues 00:34:59.276 ================ 00:34:59.276 Number of 
I/O Submission Queues: 127 00:34:59.276 Number of I/O Completion Queues: 127 00:34:59.276 00:34:59.276 Active Namespaces 00:34:59.276 ================= 00:34:59.276 Namespace ID:1 00:34:59.276 Error Recovery Timeout: Unlimited 00:34:59.276 Command Set Identifier: NVM (00h) 00:34:59.276 Deallocate: Supported 00:34:59.276 Deallocated/Unwritten Error: Not Supported 00:34:59.276 Deallocated Read Value: Unknown 00:34:59.276 Deallocate in Write Zeroes: Not Supported 00:34:59.276 Deallocated Guard Field: 0xFFFF 00:34:59.276 Flush: Supported 00:34:59.276 Reservation: Supported 00:34:59.276 Namespace Sharing Capabilities: Multiple Controllers 00:34:59.276 Size (in LBAs): 131072 (0GiB) 00:34:59.276 Capacity (in LBAs): 131072 (0GiB) 00:34:59.276 Utilization (in LBAs): 131072 (0GiB) 00:34:59.276 NGUID: ABCDEF0123456789ABCDEF0123456789 00:34:59.276 EUI64: ABCDEF0123456789 00:34:59.276 UUID: 17f16cfe-9831-4321-9895-9c8cbf1185e1 00:34:59.276 Thin Provisioning: Not Supported 00:34:59.276 Per-NS Atomic Units: Yes 00:34:59.276 Atomic Boundary Size (Normal): 0 00:34:59.276 Atomic Boundary Size (PFail): 0 00:34:59.276 Atomic Boundary Offset: 0 00:34:59.276 Maximum Single Source Range Length: 65535 00:34:59.276 Maximum Copy Length: 65535 00:34:59.276 Maximum Source Range Count: 1 00:34:59.276 NGUID/EUI64 Never Reused: No 00:34:59.276 Namespace Write Protected: No 00:34:59.276 Number of LBA Formats: 1 00:34:59.276 Current LBA Format: LBA Format #00 00:34:59.276 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:59.276 00:34:59.277 02:40:49 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:34:59.277 02:40:49 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:59.277 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.277 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:59.277 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:34:59.277 02:40:49 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:34:59.277 02:40:49 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:34:59.277 02:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:59.277 02:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:34:59.277 02:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:59.277 02:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:34:59.277 02:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:59.277 02:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:59.277 rmmod nvme_tcp 00:34:59.277 rmmod nvme_fabrics 00:34:59.277 rmmod nvme_keyring 00:34:59.536 02:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:59.536 02:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:34:59.536 02:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:34:59.536 02:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1930893 ']' 00:34:59.536 02:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1930893 00:34:59.536 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1930893 ']' 00:34:59.536 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1930893 00:34:59.536 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:34:59.536 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:59.536 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1930893 00:34:59.536 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:59.536 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:59.536 
02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1930893' 00:34:59.536 killing process with pid 1930893 00:34:59.536 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1930893 00:34:59.536 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1930893 00:34:59.536 02:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:59.536 02:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:59.536 02:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:59.536 02:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:59.536 02:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:59.536 02:40:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:59.536 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:59.536 02:40:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:02.076 02:40:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:02.076 00:35:02.076 real 0m4.788s 00:35:02.076 user 0m3.872s 00:35:02.076 sys 0m1.523s 00:35:02.076 02:40:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:02.076 02:40:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:35:02.076 ************************************ 00:35:02.076 END TEST nvmf_identify 00:35:02.076 ************************************ 00:35:02.076 02:40:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:35:02.076 02:40:51 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:35:02.076 02:40:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
00:35:02.076 02:40:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:02.076 02:40:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:02.076 ************************************ 00:35:02.076 START TEST nvmf_perf 00:35:02.076 ************************************ 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:35:02.076 * Looking for test storage... 00:35:02.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 
00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:02.076 
02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@285 -- # xtrace_disable 00:35:02.076 02:40:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:35:03.454 Found 0000:08:00.0 (0x8086 - 0x159b) 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:03.454 02:40:53 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:35:03.454 Found 0000:08:00.1 (0x8086 - 0x159b) 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:35:03.454 Found net devices under 0000:08:00.0: cvl_0_0 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:35:03.454 Found net devices under 0000:08:00.1: cvl_0_1 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:03.454 02:40:53 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:03.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:03.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:35:03.454 00:35:03.454 --- 10.0.0.2 ping statistics --- 00:35:03.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:03.454 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:03.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:03.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:35:03.454 00:35:03.454 --- 10.0.0.1 ping statistics --- 00:35:03.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:03.454 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:03.454 02:40:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:03.455 02:40:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:35:03.455 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1932455 00:35:03.455 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:35:03.455 02:40:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1932455 00:35:03.455 02:40:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1932455 ']' 00:35:03.455 02:40:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:35:03.455 02:40:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:03.455 02:40:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:03.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:03.455 02:40:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:03.455 02:40:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:35:03.455 [2024-07-11 02:40:53.859750] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:35:03.455 [2024-07-11 02:40:53.859852] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:03.714 EAL: No free 2048 kB hugepages reported on node 1 00:35:03.714 [2024-07-11 02:40:53.928303] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:03.714 [2024-07-11 02:40:54.019308] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:03.714 [2024-07-11 02:40:54.019370] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:03.714 [2024-07-11 02:40:54.019387] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:03.714 [2024-07-11 02:40:54.019400] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:03.714 [2024-07-11 02:40:54.019412] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:03.714 [2024-07-11 02:40:54.019496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:03.714 [2024-07-11 02:40:54.019547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:03.714 [2024-07-11 02:40:54.019642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:03.714 [2024-07-11 02:40:54.019646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:03.714 02:40:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:03.714 02:40:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:35:03.714 02:40:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:03.714 02:40:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:03.972 02:40:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:35:03.972 02:40:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:03.972 02:40:54 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:03.972 02:40:54 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:35:07.255 02:40:57 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:35:07.255 02:40:57 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:35:07.255 02:40:57 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:84:00.0 00:35:07.256 02:40:57 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:07.514 02:40:57 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:35:07.514 02:40:57 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 
0000:84:00.0 ']' 00:35:07.514 02:40:57 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:35:07.514 02:40:57 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:35:07.514 02:40:57 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:35:07.772 [2024-07-11 02:40:58.176429] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:08.030 02:40:58 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:08.288 02:40:58 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:35:08.288 02:40:58 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:08.288 02:40:58 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:35:08.288 02:40:58 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:08.546 02:40:58 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:08.804 [2024-07-11 02:40:59.184193] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:08.804 02:40:59 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:09.063 02:40:59 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:84:00.0 ']' 00:35:09.063 02:40:59 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0' 
00:35:09.063 02:40:59 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:35:09.063 02:40:59 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0' 00:35:10.503 Initializing NVMe Controllers 00:35:10.503 Attached to NVMe Controller at 0000:84:00.0 [8086:0a54] 00:35:10.503 Associating PCIE (0000:84:00.0) NSID 1 with lcore 0 00:35:10.503 Initialization complete. Launching workers. 00:35:10.503 ======================================================== 00:35:10.503 Latency(us) 00:35:10.503 Device Information : IOPS MiB/s Average min max 00:35:10.503 PCIE (0000:84:00.0) NSID 1 from core 0: 67015.01 261.78 476.91 55.60 4452.79 00:35:10.503 ======================================================== 00:35:10.503 Total : 67015.01 261.78 476.91 55.60 4452.79 00:35:10.503 00:35:10.503 02:41:00 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:10.503 EAL: No free 2048 kB hugepages reported on node 1 00:35:11.877 Initializing NVMe Controllers 00:35:11.877 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:11.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:11.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:35:11.877 Initialization complete. Launching workers. 
00:35:11.877 ======================================================== 00:35:11.877 Latency(us) 00:35:11.877 Device Information : IOPS MiB/s Average min max 00:35:11.877 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 113.00 0.44 9137.19 149.77 45029.31 00:35:11.877 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 50.00 0.20 20747.69 7923.05 47900.47 00:35:11.877 ======================================================== 00:35:11.877 Total : 163.00 0.64 12698.70 149.77 47900.47 00:35:11.877 00:35:11.877 02:41:01 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:11.877 EAL: No free 2048 kB hugepages reported on node 1 00:35:12.811 Initializing NVMe Controllers 00:35:12.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:12.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:12.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:35:12.811 Initialization complete. Launching workers. 
00:35:12.811 ======================================================== 00:35:12.811 Latency(us) 00:35:12.811 Device Information : IOPS MiB/s Average min max 00:35:12.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7735.98 30.22 4142.74 786.77 9959.64 00:35:12.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3838.99 15.00 8387.08 5446.25 18040.61 00:35:12.811 ======================================================== 00:35:12.811 Total : 11574.97 45.21 5550.43 786.77 18040.61 00:35:12.811 00:35:13.069 02:41:03 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:35:13.069 02:41:03 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:35:13.069 02:41:03 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:13.069 EAL: No free 2048 kB hugepages reported on node 1 00:35:15.620 Initializing NVMe Controllers 00:35:15.620 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:15.620 Controller IO queue size 128, less than required. 00:35:15.620 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:15.620 Controller IO queue size 128, less than required. 00:35:15.620 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:15.620 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:15.620 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:35:15.620 Initialization complete. Launching workers. 
00:35:15.620 ======================================================== 00:35:15.620 Latency(us) 00:35:15.620 Device Information : IOPS MiB/s Average min max 00:35:15.620 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1679.76 419.94 76985.25 53167.34 109391.06 00:35:15.620 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 573.92 143.48 231344.96 122382.58 347454.40 00:35:15.620 ======================================================== 00:35:15.620 Total : 2253.67 563.42 116294.24 53167.34 347454.40 00:35:15.620 00:35:15.620 02:41:05 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:35:15.620 EAL: No free 2048 kB hugepages reported on node 1 00:35:15.878 No valid NVMe controllers or AIO or URING devices found 00:35:15.878 Initializing NVMe Controllers 00:35:15.878 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:15.878 Controller IO queue size 128, less than required. 00:35:15.878 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:15.878 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:35:15.878 Controller IO queue size 128, less than required. 00:35:15.878 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:15.878 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:35:15.878 WARNING: Some requested NVMe devices were skipped 00:35:15.878 02:41:06 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:35:15.878 EAL: No free 2048 kB hugepages reported on node 1 00:35:18.405 Initializing NVMe Controllers 00:35:18.405 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:18.405 Controller IO queue size 128, less than required. 00:35:18.405 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:18.405 Controller IO queue size 128, less than required. 00:35:18.405 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:18.405 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:18.405 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:35:18.405 Initialization complete. Launching workers. 
00:35:18.405 00:35:18.405 ==================== 00:35:18.405 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:35:18.405 TCP transport: 00:35:18.405 polls: 18454 00:35:18.405 idle_polls: 16068 00:35:18.405 sock_completions: 2386 00:35:18.405 nvme_completions: 4817 00:35:18.405 submitted_requests: 7246 00:35:18.405 queued_requests: 1 00:35:18.405 00:35:18.405 ==================== 00:35:18.405 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:35:18.405 TCP transport: 00:35:18.405 polls: 9379 00:35:18.405 idle_polls: 6774 00:35:18.405 sock_completions: 2605 00:35:18.405 nvme_completions: 5051 00:35:18.405 submitted_requests: 7576 00:35:18.405 queued_requests: 1 00:35:18.405 ======================================================== 00:35:18.405 Latency(us) 00:35:18.405 Device Information : IOPS MiB/s Average min max 00:35:18.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1201.49 300.37 109583.19 72796.77 179771.45 00:35:18.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1259.87 314.97 102650.51 52206.08 143285.31 00:35:18.406 ======================================================== 00:35:18.406 Total : 2461.36 615.34 106034.64 52206.08 179771.45 00:35:18.406 00:35:18.406 02:41:08 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:35:18.406 02:41:08 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:18.684 02:41:08 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:35:18.684 02:41:08 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:84:00.0 ']' 00:35:18.684 02:41:08 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:35:21.963 02:41:12 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # 
ls_guid=93bc2cc4-4aff-4855-87f5-5f47bcea99d7 00:35:21.963 02:41:12 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 93bc2cc4-4aff-4855-87f5-5f47bcea99d7 00:35:21.964 02:41:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=93bc2cc4-4aff-4855-87f5-5f47bcea99d7 00:35:21.964 02:41:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:35:21.964 02:41:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:35:21.964 02:41:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:35:21.964 02:41:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:22.221 02:41:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:35:22.221 { 00:35:22.221 "uuid": "93bc2cc4-4aff-4855-87f5-5f47bcea99d7", 00:35:22.221 "name": "lvs_0", 00:35:22.221 "base_bdev": "Nvme0n1", 00:35:22.221 "total_data_clusters": 238234, 00:35:22.221 "free_clusters": 238234, 00:35:22.221 "block_size": 512, 00:35:22.221 "cluster_size": 4194304 00:35:22.221 } 00:35:22.221 ]' 00:35:22.221 02:41:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="93bc2cc4-4aff-4855-87f5-5f47bcea99d7") .free_clusters' 00:35:22.479 02:41:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:35:22.479 02:41:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="93bc2cc4-4aff-4855-87f5-5f47bcea99d7") .cluster_size' 00:35:22.479 02:41:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:35:22.479 02:41:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:35:22.479 02:41:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:35:22.479 952936 00:35:22.479 02:41:12 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:35:22.479 02:41:12 nvmf_tcp.nvmf_perf -- 
host/perf.sh@78 -- # free_mb=20480 00:35:22.479 02:41:12 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 93bc2cc4-4aff-4855-87f5-5f47bcea99d7 lbd_0 20480 00:35:23.046 02:41:13 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=135496aa-98f0-42b7-ada6-7b29ecf8a522 00:35:23.046 02:41:13 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 135496aa-98f0-42b7-ada6-7b29ecf8a522 lvs_n_0 00:35:23.611 02:41:14 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=820839a0-fb89-4593-a719-a4a6d0617677 00:35:23.611 02:41:14 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 820839a0-fb89-4593-a719-a4a6d0617677 00:35:23.611 02:41:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=820839a0-fb89-4593-a719-a4a6d0617677 00:35:23.611 02:41:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:35:23.611 02:41:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:35:23.611 02:41:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:35:23.611 02:41:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:24.176 02:41:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:35:24.176 { 00:35:24.176 "uuid": "93bc2cc4-4aff-4855-87f5-5f47bcea99d7", 00:35:24.176 "name": "lvs_0", 00:35:24.176 "base_bdev": "Nvme0n1", 00:35:24.176 "total_data_clusters": 238234, 00:35:24.176 "free_clusters": 233114, 00:35:24.176 "block_size": 512, 00:35:24.176 "cluster_size": 4194304 00:35:24.176 }, 00:35:24.176 { 00:35:24.176 "uuid": "820839a0-fb89-4593-a719-a4a6d0617677", 00:35:24.176 "name": "lvs_n_0", 00:35:24.176 "base_bdev": "135496aa-98f0-42b7-ada6-7b29ecf8a522", 00:35:24.176 "total_data_clusters": 5114, 00:35:24.176 "free_clusters": 
5114, 00:35:24.176 "block_size": 512, 00:35:24.176 "cluster_size": 4194304 00:35:24.176 } 00:35:24.176 ]' 00:35:24.176 02:41:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="820839a0-fb89-4593-a719-a4a6d0617677") .free_clusters' 00:35:24.176 02:41:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:35:24.176 02:41:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="820839a0-fb89-4593-a719-a4a6d0617677") .cluster_size' 00:35:24.176 02:41:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:35:24.176 02:41:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:35:24.176 02:41:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:35:24.176 20456 00:35:24.176 02:41:14 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:35:24.176 02:41:14 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 820839a0-fb89-4593-a719-a4a6d0617677 lbd_nest_0 20456 00:35:24.433 02:41:14 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=a3b6912b-1a4c-44a0-82d0-6737311c0b30 00:35:24.433 02:41:14 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:24.691 02:41:14 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:35:24.691 02:41:14 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 a3b6912b-1a4c-44a0-82d0-6737311c0b30 00:35:24.948 02:41:15 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:25.206 02:41:15 nvmf_tcp.nvmf_perf -- 
host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:35:25.206 02:41:15 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:35:25.206 02:41:15 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:35:25.206 02:41:15 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:35:25.206 02:41:15 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:25.464 EAL: No free 2048 kB hugepages reported on node 1 00:35:37.658 Initializing NVMe Controllers 00:35:37.658 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:37.658 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:37.658 Initialization complete. Launching workers. 00:35:37.658 ======================================================== 00:35:37.658 Latency(us) 00:35:37.658 Device Information : IOPS MiB/s Average min max 00:35:37.658 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.09 0.02 22742.17 213.97 45835.58 00:35:37.658 ======================================================== 00:35:37.658 Total : 44.09 0.02 22742.17 213.97 45835.58 00:35:37.658 00:35:37.658 02:41:26 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:35:37.658 02:41:26 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:37.658 EAL: No free 2048 kB hugepages reported on node 1 00:35:47.668 Initializing NVMe Controllers 00:35:47.668 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:47.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:47.668 Initialization complete. 
Launching workers. 00:35:47.668 ======================================================== 00:35:47.668 Latency(us) 00:35:47.668 Device Information : IOPS MiB/s Average min max 00:35:47.668 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 59.30 7.41 16885.63 4981.30 50877.24 00:35:47.668 ======================================================== 00:35:47.668 Total : 59.30 7.41 16885.63 4981.30 50877.24 00:35:47.668 00:35:47.668 02:41:36 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:35:47.668 02:41:36 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:35:47.668 02:41:36 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:47.668 EAL: No free 2048 kB hugepages reported on node 1 00:35:57.634 Initializing NVMe Controllers 00:35:57.634 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:57.634 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:57.634 Initialization complete. Launching workers. 
00:35:57.634 ======================================================== 00:35:57.634 Latency(us) 00:35:57.634 Device Information : IOPS MiB/s Average min max 00:35:57.634 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6653.71 3.25 4809.97 354.18 12156.11 00:35:57.634 ======================================================== 00:35:57.634 Total : 6653.71 3.25 4809.97 354.18 12156.11 00:35:57.634 00:35:57.634 02:41:46 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:35:57.634 02:41:46 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:57.634 EAL: No free 2048 kB hugepages reported on node 1 00:36:07.599 Initializing NVMe Controllers 00:36:07.599 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:07.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:07.599 Initialization complete. Launching workers. 
00:36:07.599 ======================================================== 00:36:07.599 Latency(us) 00:36:07.599 Device Information : IOPS MiB/s Average min max 00:36:07.599 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3878.40 484.80 8255.22 655.04 20201.06 00:36:07.599 ======================================================== 00:36:07.599 Total : 3878.40 484.80 8255.22 655.04 20201.06 00:36:07.599 00:36:07.599 02:41:56 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:36:07.599 02:41:56 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:36:07.599 02:41:56 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:07.599 EAL: No free 2048 kB hugepages reported on node 1 00:36:17.562 Initializing NVMe Controllers 00:36:17.562 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:17.562 Controller IO queue size 128, less than required. 00:36:17.562 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:17.562 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:17.562 Initialization complete. Launching workers. 
00:36:17.562 ======================================================== 00:36:17.562 Latency(us) 00:36:17.562 Device Information : IOPS MiB/s Average min max 00:36:17.562 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10252.80 5.01 12488.92 1976.97 25845.21 00:36:17.562 ======================================================== 00:36:17.562 Total : 10252.80 5.01 12488.92 1976.97 25845.21 00:36:17.562 00:36:17.562 02:42:07 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:36:17.562 02:42:07 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:17.562 EAL: No free 2048 kB hugepages reported on node 1 00:36:27.528 Initializing NVMe Controllers 00:36:27.528 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:27.528 Controller IO queue size 128, less than required. 00:36:27.528 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:27.528 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:27.528 Initialization complete. Launching workers. 
00:36:27.528 ======================================================== 00:36:27.528 Latency(us) 00:36:27.528 Device Information : IOPS MiB/s Average min max 00:36:27.528 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1201.59 150.20 107222.87 16277.04 223384.00 00:36:27.528 ======================================================== 00:36:27.528 Total : 1201.59 150.20 107222.87 16277.04 223384.00 00:36:27.528 00:36:27.528 02:42:17 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:27.806 02:42:18 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a3b6912b-1a4c-44a0-82d0-6737311c0b30 00:36:28.736 02:42:18 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:36:28.736 02:42:19 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 135496aa-98f0-42b7-ada6-7b29ecf8a522 00:36:29.303 02:42:19 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:36:29.303 02:42:19 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:36:29.303 02:42:19 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:36:29.303 02:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:29.303 02:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:36:29.303 02:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:29.303 02:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:36:29.303 02:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:29.303 02:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:29.303 rmmod 
nvme_tcp 00:36:29.303 rmmod nvme_fabrics 00:36:29.562 rmmod nvme_keyring 00:36:29.562 02:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:29.562 02:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:36:29.562 02:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:36:29.562 02:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1932455 ']' 00:36:29.562 02:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1932455 00:36:29.562 02:42:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1932455 ']' 00:36:29.562 02:42:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1932455 00:36:29.562 02:42:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:36:29.562 02:42:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:29.562 02:42:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1932455 00:36:29.562 02:42:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:29.562 02:42:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:29.562 02:42:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1932455' 00:36:29.562 killing process with pid 1932455 00:36:29.562 02:42:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1932455 00:36:29.562 02:42:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1932455 00:36:30.938 02:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:30.938 02:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:30.938 02:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:30.938 02:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:30.938 02:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:36:30.938 02:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:30.938 02:42:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:30.938 02:42:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:33.510 02:42:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:33.510 00:36:33.510 real 1m31.381s 00:36:33.510 user 5m40.270s 00:36:33.510 sys 0m14.660s 00:36:33.510 02:42:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:33.510 02:42:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:36:33.510 ************************************ 00:36:33.510 END TEST nvmf_perf 00:36:33.510 ************************************ 00:36:33.510 02:42:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:36:33.510 02:42:23 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:36:33.510 02:42:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:33.510 02:42:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:33.510 02:42:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:33.510 ************************************ 00:36:33.510 START TEST nvmf_fio_host 00:36:33.510 ************************************ 00:36:33.510 02:42:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:36:33.510 * Looking for test storage... 
00:36:33.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:33.510 02:42:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:33.510 02:42:23 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:33.510 02:42:23 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:33.510 02:42:23 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:33.510 02:42:23 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.510 02:42:23 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.510 02:42:23 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.510 02:42:23 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:36:33.510 02:42:23 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.510 02:42:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:33.510 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:36:33.510 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:33.510 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:33.510 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:33.510 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:33.510 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:33.510 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:36:33.510 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:33.510 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:33.510 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:33.510 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:36:33.511 
02:42:23 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 
00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no
00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns
00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable
00:36:33.511 02:42:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:36:34.890 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:36:34.890 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=()
00:36:34.890 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs
00:36:34.890 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=()
00:36:34.890 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:36:34.890 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=()
00:36:34.890 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers
00:36:34.890 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=()
00:36:34.890 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs
00:36:34.890 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=()
00:36:34.890 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810
00:36:34.890 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=()
00:36:34.890 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722
00:36:34.890 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=()
00:36:34.890 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx
00:36:34.890 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)'
00:36:34.891 Found 0000:08:00.0 (0x8086 - 0x159b)
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)'
00:36:34.891 Found 0000:08:00.1 (0x8086 - 0x159b)
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]]
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0'
00:36:34.891 Found net devices under 0000:08:00.0: cvl_0_0
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]]
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1'
00:36:34.891 Found net devices under 0000:08:00.1: cvl_0_1
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:36:34.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:36:34.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms
00:36:34.891
00:36:34.891 --- 10.0.0.2 ping statistics ---
00:36:34.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:34.891 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:36:34.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:34.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms
00:36:34.891
00:36:34.891 --- 10.0.0.1 ping statistics ---
00:36:34.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:34.891 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1942469
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1942469
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1942469 ']'
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:34.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable
00:36:34.891 02:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:36:35.151 [2024-07-11 02:42:25.334543] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:36:35.151 [2024-07-11 02:42:25.334646] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:35.151 EAL: No free 2048 kB hugepages reported on node 1
00:36:35.151 [2024-07-11 02:42:25.400334] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:35.151 [2024-07-11 02:42:25.491021] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:35.151 [2024-07-11 02:42:25.491081] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:35.151 [2024-07-11 02:42:25.491098] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:35.151 [2024-07-11 02:42:25.491111] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:35.151 [2024-07-11 02:42:25.491123] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:35.151 [2024-07-11 02:42:25.491230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:35.151 [2024-07-11 02:42:25.491280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:36:35.151 [2024-07-11 02:42:25.491330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:36:35.151 [2024-07-11 02:42:25.491333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:36:35.409 02:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:35.409 02:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0
00:36:35.409 02:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:36:35.668 [2024-07-11 02:42:25.886968] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:35.668 02:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt
00:36:35.668 02:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable
00:36:35.668 02:42:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:36:35.668 02:42:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:36:35.927 Malloc1
00:36:35.927 02:42:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:36:36.185 02:42:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:36:36.444 02:42:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:36.702 [2024-07-11 02:42:26.999054] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:36.702 02:42:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:36:36.961 02:42:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:36:36.961 02:42:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:36:36.961 02:42:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:36:36.961 02:42:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:36:36.961 02:42:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:36:36.961 02:42:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers
00:36:36.961 02:42:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:36:36.961 02:42:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift
00:36:36.961 02:42:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib=
00:36:36.961 02:42:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:36:36.961 02:42:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:36:36.961 02:42:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan
00:36:36.961 02:42:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:36:36.961 02:42:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:36:36.961 02:42:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:36:36.961 02:42:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:36:36.961 02:42:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:36:36.961 02:42:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:36:36.961 02:42:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:36:36.961 02:42:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:36:36.961 02:42:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:36:36.961 02:42:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:36:36.961 02:42:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:36:37.221 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:36:37.221 fio-3.35
00:36:37.221 Starting 1 thread
00:36:37.221 EAL: No free 2048 kB hugepages reported on node 1
00:36:39.755
00:36:39.755 test: (groupid=0, jobs=1): err= 0: pid=1942810: Thu Jul 11 02:42:29 2024
00:36:39.755 read: IOPS=7792, BW=30.4MiB/s (31.9MB/s)(61.1MiB/2007msec)
00:36:39.755 slat (usec): min=2, max=214, avg= 2.82, stdev= 2.56
00:36:39.755 clat (usec): min=2982, max=15470, avg=8962.94, stdev=753.79
00:36:39.755 lat (usec): min=3023, max=15472, avg=8965.76, stdev=753.58
00:36:39.755 clat percentiles (usec):
00:36:39.755 | 1.00th=[ 7177], 5.00th=[ 7767], 10.00th=[ 8094], 20.00th=[ 8356],
00:36:39.755 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9110],
00:36:39.755 | 70.00th=[ 9372], 80.00th=[ 9503], 90.00th=[ 9896], 95.00th=[10159],
00:36:39.755 | 99.00th=[10552], 99.50th=[10814], 99.90th=[13829], 99.95th=[14091],
00:36:39.755 | 99.99th=[15270]
00:36:39.755 bw ( KiB/s): min=29752, max=31800, per=99.89%, avg=31136.00, stdev=941.10, samples=4
00:36:39.755 iops : min= 7438, max= 7950, avg=7784.00, stdev=235.28, samples=4
00:36:39.755 write: IOPS=7774, BW=30.4MiB/s (31.8MB/s)(60.9MiB/2007msec); 0 zone resets
00:36:39.755 slat (usec): min=2, max=194, avg= 2.99, stdev= 1.87
00:36:39.755 clat (usec): min=2201, max=14046, avg=7374.48, stdev=617.69
00:36:39.755 lat (usec): min=2215, max=14048, avg=7377.48, stdev=617.59
00:36:39.755 clat percentiles (usec):
00:36:39.755 | 1.00th=[ 5932], 5.00th=[ 6456], 10.00th=[ 6652], 20.00th=[ 6915],
00:36:39.755 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7504],
00:36:39.755 | 70.00th=[ 7701], 80.00th=[ 7832], 90.00th=[ 8094], 95.00th=[ 8291],
00:36:39.755 | 99.00th=[ 8717], 99.50th=[ 8979], 99.90th=[12125], 99.95th=[13173],
00:36:39.755 | 99.99th=[13960]
00:36:39.755 bw ( KiB/s): min=30928, max=31368, per=99.99%, avg=31094.00, stdev=201.05, samples=4
00:36:39.755 iops : min= 7732, max= 7842, avg=7773.50, stdev=50.26, samples=4
00:36:39.755 lat (msec) : 4=0.09%, 10=96.55%, 20=3.36%
00:36:39.755 cpu : usr=68.39%, sys=29.91%, ctx=102, majf=0, minf=32
00:36:39.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:36:39.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:36:39.755 issued rwts: total=15639,15603,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:39.755 latency : target=0, window=0, percentile=100.00%, depth=128
00:36:39.755
00:36:39.755 Run status group 0 (all jobs):
00:36:39.755 READ: bw=30.4MiB/s (31.9MB/s), 30.4MiB/s-30.4MiB/s (31.9MB/s-31.9MB/s), io=61.1MiB (64.1MB), run=2007-2007msec
00:36:39.755 WRITE: bw=30.4MiB/s (31.8MB/s), 30.4MiB/s-30.4MiB/s (31.8MB/s-31.8MB/s), io=60.9MiB (63.9MB), run=2007-2007msec
00:36:39.755 02:42:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:36:39.755 02:42:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:36:39.756 02:42:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:36:39.756 02:42:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:36:39.756 02:42:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers
00:36:39.756 02:42:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:36:39.756 02:42:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift
00:36:39.756 02:42:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib=
00:36:39.756 02:42:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:36:39.756 02:42:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:36:39.756 02:42:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan
00:36:39.756 02:42:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:36:39.756 02:42:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:36:39.756 02:42:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:36:39.756 02:42:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:36:39.756 02:42:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:36:39.756 02:42:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:36:39.756 02:42:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:36:39.756 02:42:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:36:39.756 02:42:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:36:39.756 02:42:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:36:39.756 02:42:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:36:40.014 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:36:40.014 fio-3.35
00:36:40.014 Starting 1 thread
00:36:40.014 EAL: No free 2048 kB hugepages reported on node 1
00:36:42.543
00:36:42.543 test: (groupid=0, jobs=1): err= 0: pid=1943088: Thu Jul 11 02:42:32 2024
00:36:42.543 read: IOPS=7587, BW=119MiB/s (124MB/s)(238MiB/2011msec)
00:36:42.543 slat (usec): min=3, max=124, avg= 4.35, stdev= 1.73
00:36:42.543 clat (usec): min=2735, max=18713, avg=9821.41, stdev=2305.57
00:36:42.543 lat (usec): min=2740, max=18717, avg=9825.75, stdev=2305.64
00:36:42.543 clat percentiles (usec):
00:36:42.543 | 1.00th=[ 5080], 5.00th=[ 6325], 10.00th=[ 7046], 20.00th=[ 7832],
00:36:42.543 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[10290],
00:36:42.543 | 70.00th=[11076], 80.00th=[11863], 90.00th=[12649], 95.00th=[13698],
00:36:42.543 | 99.00th=[16057], 99.50th=[16909], 99.90th=[17695], 99.95th=[17957],
00:36:42.543 | 99.99th=[17957]
00:36:42.543 bw ( KiB/s): min=53184, max=66144, per=50.69%, avg=61536.00, stdev=6099.23, samples=4
00:36:42.543 iops : min= 3324, max= 4134, avg=3846.00, stdev=381.20, samples=4
00:36:42.543 write: IOPS=4360, BW=68.1MiB/s (71.4MB/s)(126MiB/1843msec); 0 zone resets
00:36:42.543 slat (usec): min=32, max=203, avg=38.29, stdev= 6.45
00:36:42.543 clat (usec): min=7175, max=23495, avg=12786.75, stdev=2217.35
00:36:42.543 lat (usec): min=7208, max=23527, avg=12825.04, stdev=2217.82
00:36:42.543 clat percentiles (usec):
00:36:42.543 | 1.00th=[ 8291], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[10814],
00:36:42.543 | 30.00th=[11469], 40.00th=[12125], 50.00th=[12780], 60.00th=[13304],
00:36:42.543 | 70.00th=[13829], 80.00th=[14615], 90.00th=[15795], 95.00th=[16581],
00:36:42.543 | 99.00th=[18482], 99.50th=[19530], 99.90th=[22152], 99.95th=[22414],
00:36:42.543 | 99.99th=[23462]
00:36:42.543 bw ( KiB/s): min=54880, max=69088, per=91.54%, avg=63872.00, stdev=6665.53, samples=4
00:36:42.543 iops : min= 3430, max= 4318, avg=3992.00, stdev=416.60, samples=4
00:36:42.543 lat (msec) : 4=0.11%, 10=39.49%, 20=60.27%, 50=0.12%
00:36:42.543 cpu : usr=80.25%, sys=18.51%, ctx=41, majf=0, minf=52
00:36:42.543 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6%
00:36:42.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:42.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:36:42.543 issued rwts: total=15259,8037,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:42.543 latency : target=0, window=0, percentile=100.00%, depth=128
00:36:42.543
00:36:42.543 Run status group 0 (all jobs):
00:36:42.543 READ: bw=119MiB/s (124MB/s), 119MiB/s-119MiB/s (124MB/s-124MB/s), io=238MiB (250MB), run=2011-2011msec
00:36:42.543 WRITE: bw=68.1MiB/s (71.4MB/s), 68.1MiB/s-68.1MiB/s (71.4MB/s-71.4MB/s), io=126MiB (132MB), run=1843-1843msec
00:36:42.543 02:42:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:42.543 02:42:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']'
00:36:42.543 02:42:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs))
00:36:42.543 02:42:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs
00:36:42.543 02:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=()
00:36:42.543 02:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs
00:36:42.543 02:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:36:42.543 02:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:36:42.543 02:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr'
00:36:42.543 02:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 ))
00:36:42.543 02:42:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:84:00.0
00:36:42.543 02:42:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:84:00.0 -i 10.0.0.2
00:36:45.824 Nvme0n1
00:36:45.824 02:42:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0
00:36:49.104 02:42:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=55ad4dae-4357-496f-ada9-98d8de784dad
00:36:49.104 02:42:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 55ad4dae-4357-496f-ada9-98d8de784dad
00:36:49.104 02:42:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=55ad4dae-4357-496f-ada9-98d8de784dad
00:36:49.104 02:42:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info
00:36:49.104 02:42:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc
00:36:49.104 02:42:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs
00:36:49.105 02:42:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:36:49.105 02:42:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[
00:36:49.105 {
00:36:49.105 "uuid": "55ad4dae-4357-496f-ada9-98d8de784dad",
00:36:49.105 "name": "lvs_0",
00:36:49.105 "base_bdev": "Nvme0n1",
00:36:49.105 "total_data_clusters": 930,
00:36:49.105 "free_clusters": 930,
00:36:49.105 "block_size": 512,
00:36:49.105 "cluster_size": 1073741824
00:36:49.105 }
00:36:49.105 ]'
00:36:49.105 02:42:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="55ad4dae-4357-496f-ada9-98d8de784dad") .free_clusters'
00:36:49.105 02:42:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930
00:36:49.105 02:42:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="55ad4dae-4357-496f-ada9-98d8de784dad") .cluster_size'
00:36:49.105 02:42:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824
00:36:49.105 02:42:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320
00:36:49.105 02:42:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320
00:36:49.105 952320
00:36:49.105 02:42:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320
00:36:49.362 09521838-b1d4-4c36-bea0-42d5e8e569fe
00:36:49.362 02:42:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
00:36:49.619 02:42:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
00:36:49.877 02:42:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:36:50.136 02:42:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:36:50.136 02:42:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:36:50.136 02:42:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:36:50.136 02:42:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:36:50.136 02:42:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers
00:36:50.136 02:42:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:36:50.136 02:42:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift
00:36:50.136 02:42:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib=
00:36:50.136 02:42:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:36:50.136 02:42:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:36:50.136 02:42:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan
00:36:50.136 02:42:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:36:50.394 02:42:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:36:50.394 02:42:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:36:50.394 02:42:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:36:50.394 02:42:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:36:50.394 02:42:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:36:50.394 02:42:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:36:50.394 02:42:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:36:50.394 02:42:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:36:50.394 02:42:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:36:50.395 02:42:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:36:50.395 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:36:50.395 fio-3.35
00:36:50.395 Starting 1 thread
00:36:50.395 EAL: No free 2048 kB hugepages reported on node 1
00:36:52.920
00:36:52.920 test: (groupid=0, jobs=1): err= 0: pid=1944153: Thu Jul 11 02:42:43 2024
00:36:52.920 read: IOPS=5323, BW=20.8MiB/s (21.8MB/s)(41.8MiB/2009msec)
00:36:52.920 slat (usec): min=2, max=203, avg= 2.82, stdev= 2.51
00:36:52.920 clat (usec): min=1432, max=171608, avg=13112.36, stdev=12230.24
00:36:52.921 lat (usec): min=1435, max=171672, avg=13115.18, stdev=12230.73
00:36:52.921 clat percentiles (msec):
00:36:52.921 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12],
00:36:52.921 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 13],
00:36:52.921 | 70.00th=[ 13], 80.00th=[ 14], 90.00th=[ 14], 95.00th=[ 14],
00:36:52.921 | 99.00th=[ 16], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171],
00:36:52.921 | 99.99th=[ 171]
00:36:52.921 bw ( KiB/s): min=15264, max=23256, per=99.73%, avg=21234.00, stdev=3980.20, samples=4
00:36:52.921 iops : min= 3816, max= 5814, avg=5308.50, stdev=995.05, samples=4
00:36:52.921 write: IOPS=5307, BW=20.7MiB/s (21.7MB/s)(41.7MiB/2009msec); 0 zone resets
00:36:52.921 slat (usec): min=2, max=142, avg= 2.93, stdev= 1.55
00:36:52.921 clat (usec): min=366, max=169345, avg=10830.75, stdev=11479.61
00:36:52.921 lat (usec): min=370, max=169353, avg=10833.68, stdev=11480.03
00:36:52.921 clat percentiles (msec):
00:36:52.921 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10],
00:36:52.921 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 11],
00:36:52.921 | 70.00th=[ 11], 80.00th=[ 11], 90.00th=[ 12], 95.00th=[ 12],
00:36:52.921 | 99.00th=[ 13], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 169],
00:36:52.921 | 99.99th=[ 169]
00:36:52.921 bw ( KiB/s): min=16104, max=23104, per=99.98%, avg=21226.00, stdev=3417.20, samples=4
iops : min= 4026, max= 5776, avg=5306.50, stdev=854.30, samples=4 00:36:52.921 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:36:52.921 lat (msec) : 2=0.03%, 4=0.09%, 10=25.90%, 20=73.33%, 50=0.02% 00:36:52.921 lat (msec) : 250=0.60% 00:36:52.921 cpu : usr=64.59%, sys=33.96%, ctx=82, majf=0, minf=32 00:36:52.921 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:36:52.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:52.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:52.921 issued rwts: total=10694,10663,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:52.921 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:52.921 00:36:52.921 Run status group 0 (all jobs): 00:36:52.921 READ: bw=20.8MiB/s (21.8MB/s), 20.8MiB/s-20.8MiB/s (21.8MB/s-21.8MB/s), io=41.8MiB (43.8MB), run=2009-2009msec 00:36:52.921 WRITE: bw=20.7MiB/s (21.7MB/s), 20.7MiB/s-20.7MiB/s (21.7MB/s-21.7MB/s), io=41.7MiB (43.7MB), run=2009-2009msec 00:36:52.921 02:42:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:53.179 02:42:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:36:54.548 02:42:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=4d590d5e-f929-44ea-8719-42813a1749a2 00:36:54.548 02:42:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 4d590d5e-f929-44ea-8719-42813a1749a2 00:36:54.548 02:42:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=4d590d5e-f929-44ea-8719-42813a1749a2 00:36:54.548 02:42:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:36:54.548 02:42:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:36:54.548 02:42:44 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:36:54.548 02:42:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:36:54.548 02:42:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:36:54.548 { 00:36:54.548 "uuid": "55ad4dae-4357-496f-ada9-98d8de784dad", 00:36:54.548 "name": "lvs_0", 00:36:54.548 "base_bdev": "Nvme0n1", 00:36:54.548 "total_data_clusters": 930, 00:36:54.548 "free_clusters": 0, 00:36:54.548 "block_size": 512, 00:36:54.548 "cluster_size": 1073741824 00:36:54.548 }, 00:36:54.548 { 00:36:54.548 "uuid": "4d590d5e-f929-44ea-8719-42813a1749a2", 00:36:54.548 "name": "lvs_n_0", 00:36:54.548 "base_bdev": "09521838-b1d4-4c36-bea0-42d5e8e569fe", 00:36:54.548 "total_data_clusters": 237847, 00:36:54.548 "free_clusters": 237847, 00:36:54.548 "block_size": 512, 00:36:54.548 "cluster_size": 4194304 00:36:54.548 } 00:36:54.548 ]' 00:36:54.548 02:42:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="4d590d5e-f929-44ea-8719-42813a1749a2") .free_clusters' 00:36:54.548 02:42:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:36:54.548 02:42:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="4d590d5e-f929-44ea-8719-42813a1749a2") .cluster_size' 00:36:54.805 02:42:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:36:54.805 02:42:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:36:54.805 02:42:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:36:54.805 951388 00:36:54.805 02:42:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:36:55.370 aea0cac4-53dc-4ef4-844c-d2d0435c1167 00:36:55.370 02:42:45 
nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:36:55.935 02:42:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:36:56.193 02:42:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:36:56.450 02:42:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:36:56.450 02:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:36:56.450 02:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:56.450 02:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:56.450 02:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:56.451 02:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:36:56.451 02:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:36:56.451 02:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:56.451 02:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:56.451 
02:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:36:56.451 02:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:36:56.451 02:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:56.451 02:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:56.451 02:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:56.451 02:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:56.451 02:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:36:56.451 02:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:56.451 02:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:56.451 02:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:56.451 02:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:56.451 02:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:36:56.451 02:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:36:56.712 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:36:56.712 fio-3.35 00:36:56.712 Starting 1 thread 00:36:56.712 EAL: No free 2048 kB hugepages reported on node 1 00:36:59.271 00:36:59.271 test: (groupid=0, jobs=1): err= 0: pid=1944718: Thu Jul 11 02:42:49 2024 00:36:59.271 read: 
IOPS=5080, BW=19.8MiB/s (20.8MB/s)(39.9MiB/2010msec) 00:36:59.271 slat (usec): min=2, max=170, avg= 2.93, stdev= 2.19 00:36:59.271 clat (usec): min=4684, max=22781, avg=13785.24, stdev=1252.53 00:36:59.271 lat (usec): min=4690, max=22784, avg=13788.17, stdev=1252.33 00:36:59.271 clat percentiles (usec): 00:36:59.271 | 1.00th=[10945], 5.00th=[11863], 10.00th=[12256], 20.00th=[12780], 00:36:59.271 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:36:59.271 | 70.00th=[14353], 80.00th=[14746], 90.00th=[15270], 95.00th=[15664], 00:36:59.271 | 99.00th=[16450], 99.50th=[16712], 99.90th=[21627], 99.95th=[21890], 00:36:59.271 | 99.99th=[22676] 00:36:59.271 bw ( KiB/s): min=19312, max=20720, per=99.72%, avg=20264.00, stdev=642.23, samples=4 00:36:59.271 iops : min= 4828, max= 5180, avg=5066.00, stdev=160.56, samples=4 00:36:59.271 write: IOPS=5063, BW=19.8MiB/s (20.7MB/s)(39.8MiB/2010msec); 0 zone resets 00:36:59.271 slat (usec): min=2, max=117, avg= 3.06, stdev= 1.36 00:36:59.271 clat (usec): min=2227, max=20460, avg=11295.56, stdev=1043.02 00:36:59.271 lat (usec): min=2247, max=20462, avg=11298.62, stdev=1042.96 00:36:59.271 clat percentiles (usec): 00:36:59.271 | 1.00th=[ 8979], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10552], 00:36:59.271 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:36:59.271 | 70.00th=[11731], 80.00th=[12125], 90.00th=[12518], 95.00th=[12780], 00:36:59.271 | 99.00th=[13435], 99.50th=[13698], 99.90th=[18482], 99.95th=[20317], 00:36:59.271 | 99.99th=[20317] 00:36:59.271 bw ( KiB/s): min=20160, max=20392, per=100.00%, avg=20258.00, stdev=97.19, samples=4 00:36:59.271 iops : min= 5040, max= 5098, avg=5064.50, stdev=24.30, samples=4 00:36:59.271 lat (msec) : 4=0.04%, 10=4.03%, 20=95.77%, 50=0.16% 00:36:59.271 cpu : usr=66.90%, sys=31.61%, ctx=96, majf=0, minf=32 00:36:59.271 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:36:59.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:59.271 issued rwts: total=10211,10177,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.271 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:59.271 00:36:59.271 Run status group 0 (all jobs): 00:36:59.271 READ: bw=19.8MiB/s (20.8MB/s), 19.8MiB/s-19.8MiB/s (20.8MB/s-20.8MB/s), io=39.9MiB (41.8MB), run=2010-2010msec 00:36:59.271 WRITE: bw=19.8MiB/s (20.7MB/s), 19.8MiB/s-19.8MiB/s (20.7MB/s-20.7MB/s), io=39.8MiB (41.7MB), run=2010-2010msec 00:36:59.271 02:42:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:36:59.271 02:42:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:36:59.271 02:42:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:37:03.455 02:42:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:37:03.455 02:42:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:37:06.734 02:42:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:37:06.734 02:42:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:37:08.635 02:42:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:08.635 02:42:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:37:08.635 02:42:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:37:08.635 02:42:58 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:37:08.636 02:42:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:37:08.636 02:42:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:08.636 02:42:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:37:08.636 02:42:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:08.636 02:42:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:08.636 rmmod nvme_tcp 00:37:08.636 rmmod nvme_fabrics 00:37:08.636 rmmod nvme_keyring 00:37:08.636 02:42:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:08.636 02:42:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:37:08.636 02:42:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:37:08.636 02:42:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1942469 ']' 00:37:08.636 02:42:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1942469 00:37:08.636 02:42:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1942469 ']' 00:37:08.636 02:42:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1942469 00:37:08.636 02:42:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:37:08.636 02:42:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:08.636 02:42:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1942469 00:37:08.636 02:42:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:08.636 02:42:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:08.636 02:42:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1942469' 00:37:08.636 killing process with pid 1942469 00:37:08.636 02:42:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1942469 
00:37:08.636 02:42:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1942469 00:37:08.894 02:42:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:08.894 02:42:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:08.894 02:42:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:08.894 02:42:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:08.894 02:42:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:08.894 02:42:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:08.894 02:42:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:08.894 02:42:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:10.802 02:43:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:11.061 00:37:11.061 real 0m37.778s 00:37:11.061 user 2m26.583s 00:37:11.061 sys 0m6.433s 00:37:11.061 02:43:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:11.061 02:43:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:37:11.061 ************************************ 00:37:11.061 END TEST nvmf_fio_host 00:37:11.061 ************************************ 00:37:11.061 02:43:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:37:11.061 02:43:01 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:37:11.061 02:43:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:11.061 02:43:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:11.061 02:43:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:11.061 ************************************ 00:37:11.061 START TEST nvmf_failover 
00:37:11.061 ************************************ 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:37:11.061 * Looking for test storage... 00:37:11.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:11.061 02:43:01 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:37:11.061 02:43:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:37:12.962 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:12.962 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:37:12.962 02:43:02 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:12.962 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:12.962 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:12.962 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:12.962 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:12.962 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:37:12.962 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:12.962 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:37:12.962 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:37:12.963 Found 0000:08:00.0 (0x8086 - 0x159b) 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:37:12.963 Found 0000:08:00.1 (0x8086 - 0x159b) 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:37:12.963 Found net devices under 0000:08:00.0: cvl_0_0 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:37:12.963 Found net devices under 0000:08:00.1: cvl_0_1 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:12.963 02:43:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:12.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:12.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:37:12.963 00:37:12.963 --- 10.0.0.2 ping statistics --- 00:37:12.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:12.963 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:12.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:12.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:37:12.963 00:37:12.963 --- 10.0.0.1 ping statistics --- 00:37:12.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:12.963 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1947299 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1947299 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1947299 ']' 
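The nvmf_tcp_init sequence above (common.sh @229–@268) wires the two ice ports into a loopback pair: the target port is moved into a private network namespace, each side gets an address on 10.0.0.0/24, and the link is verified with ping in both directions. A condensed sketch of that setup, assuming the same interface names and addresses shown in the log (setup_loopback is only defined, not run, since executing it requires root and the two physical ports):

```shell
# Condensed sketch of the namespace setup performed by nvmf_tcp_init.
# Interface names and addresses mirror the log; calling setup_loopback
# needs root privileges and the two NIC ports, so it is only defined here.
NVMF_TARGET_INTERFACE=cvl_0_0
NVMF_INITIATOR_INTERFACE=cvl_0_1
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
# Commands that must run on the target side are prefixed with this array.
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")

setup_loopback() {
    ip netns add "$NVMF_TARGET_NAMESPACE"
    # Move the target port into the namespace; the initiator port stays.
    ip link set "$NVMF_TARGET_INTERFACE" netns "$NVMF_TARGET_NAMESPACE"
    ip addr add 10.0.0.1/24 dev "$NVMF_INITIATOR_INTERFACE"
    "${NVMF_TARGET_NS_CMD[@]}" ip addr add 10.0.0.2/24 dev "$NVMF_TARGET_INTERFACE"
    ip link set "$NVMF_INITIATOR_INTERFACE" up
    "${NVMF_TARGET_NS_CMD[@]}" ip link set "$NVMF_TARGET_INTERFACE" up
    "${NVMF_TARGET_NS_CMD[@]}" ip link set lo up
    # Allow NVMe/TCP traffic to port 4420 on the initiator side.
    iptables -I INPUT 1 -i "$NVMF_INITIATOR_INTERFACE" -p tcp --dport 4420 -j ACCEPT
}
```

Because both endpoints sit on the same host, the later ping checks (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) confirm the pair before the target is started.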
00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:12.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:12.963 02:43:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:37:12.963 [2024-07-11 02:43:03.171364] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:37:12.963 [2024-07-11 02:43:03.171458] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:12.963 EAL: No free 2048 kB hugepages reported on node 1 00:37:12.963 [2024-07-11 02:43:03.236315] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:12.963 [2024-07-11 02:43:03.323187] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:12.963 [2024-07-11 02:43:03.323242] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:12.963 [2024-07-11 02:43:03.323264] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:12.963 [2024-07-11 02:43:03.323278] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:12.963 [2024-07-11 02:43:03.323290] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
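nvmfappstart then launches the SPDK target inside that namespace: common.sh@270 prepends the netns wrapper to NVMF_APP, and @480 runs the resulting command in the background before waitforlisten polls the RPC socket. A sketch of just the command assembly (binary path and flags copied from the log; the command is printed, never executed):

```shell
# Sketch of the command assembly around common.sh@270/@480: the ip-netns
# prefix is prepended so nvmf_tgt binds its sockets inside the test
# namespace. Paths and flags are copied from the log; nothing is launched.
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
# -i 0: shared-memory id, -e 0xFFFF: tracepoint group mask, -m 0xE: cores 1-3.
echo "${NVMF_APP[@]}" -i 0 -e 0xFFFF -m 0xE
```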
00:37:12.963 [2024-07-11 02:43:03.323378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:12.963 [2024-07-11 02:43:03.323430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:37:12.963 [2024-07-11 02:43:03.323433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:13.221 02:43:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:13.221 02:43:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:37:13.221 02:43:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:13.221 02:43:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:13.221 02:43:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:37:13.221 02:43:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:13.221 02:43:03 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:13.479 [2024-07-11 02:43:03.732570] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:13.479 02:43:03 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:37:13.736 Malloc0 00:37:13.736 02:43:04 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:13.993 02:43:04 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:14.558 02:43:04 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:14.558 [2024-07-11 02:43:04.947006] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:14.558 02:43:04 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:37:15.122 [2024-07-11 02:43:05.243937] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:37:15.122 02:43:05 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:37:15.122 [2024-07-11 02:43:05.540882] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:37:15.379 02:43:05 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1947521 00:37:15.379 02:43:05 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:37:15.379 02:43:05 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:15.379 02:43:05 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1947521 /var/tmp/bdevperf.sock 00:37:15.379 02:43:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1947521 ']' 00:37:15.379 02:43:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:15.379 02:43:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:15.379 02:43:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:15.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:15.379 02:43:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:15.379 02:43:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:37:15.635 02:43:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:15.635 02:43:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:37:15.635 02:43:05 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:15.892 NVMe0n1 00:37:15.892 02:43:06 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:16.149 00:37:16.406 02:43:06 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1947626 00:37:16.406 02:43:06 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:16.406 02:43:06 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:37:17.338 02:43:07 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:17.606 [2024-07-11 02:43:07.870469] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52df50 is same with the state(5) to be set 00:37:17.606 [2024-07-11 02:43:07.870553] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52df50 is 
same with the state(5) to be set 00:37:17.607 02:43:07 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:37:20.887 02:43:10 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:20.887 00 00:37:20.887 02:43:11 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:37:21.145 [2024-07-11 02:43:11.496784] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ed00 is same with the state(5)
to be set 00:37:21.146 02:43:11 nvmf_tcp.nvmf_failover -- host/failover.sh@50
-- # sleep 3 00:37:24.523 02:43:14 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:24.523 [2024-07-11 02:43:14.796870] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:24.523 02:43:14 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:37:25.457 02:43:15 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:37:25.715 02:43:16 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1947626 00:37:32.280 0 00:37:32.280 02:43:21 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1947521 00:37:32.280 02:43:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1947521 ']' 00:37:32.280 02:43:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1947521 00:37:32.280 02:43:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:37:32.280 02:43:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:32.280 02:43:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1947521 00:37:32.280 02:43:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:32.280 02:43:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:32.280 02:43:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1947521' 00:37:32.280 killing process with pid 1947521 00:37:32.280 02:43:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1947521 00:37:32.280 02:43:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1947521 00:37:32.280 02:43:21 nvmf_tcp.nvmf_failover 
-- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:37:32.280 [2024-07-11 02:43:05.607932] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:37:32.280 [2024-07-11 02:43:05.608047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1947521 ] 00:37:32.280 EAL: No free 2048 kB hugepages reported on node 1 00:37:32.280 [2024-07-11 02:43:05.668132] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:32.280 [2024-07-11 02:43:05.755468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:32.280 Running I/O for 15 seconds... 00:37:32.280 [2024-07-11 02:43:07.873256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:70144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.280 [2024-07-11 02:43:07.873299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.280 [2024-07-11 02:43:07.873331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:70152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.280 [2024-07-11 02:43:07.873348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.280 [2024-07-11 02:43:07.873367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:70160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.280 [2024-07-11 02:43:07.873383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.280 [2024-07-11 02:43:07.873401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:24 nsid:1 lba:70168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.280 [2024-07-11 02:43:07.873417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.280 [2024-07-11 02:43:07.873434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:70176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.280 [2024-07-11 02:43:07.873450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.280 [2024-07-11 02:43:07.873468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.280 [2024-07-11 02:43:07.873483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.280 [2024-07-11 02:43:07.873501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.280 [2024-07-11 02:43:07.873524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.280 [2024-07-11 02:43:07.873543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:70200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.280 [2024-07-11 02:43:07.873558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.280 [2024-07-11 02:43:07.873576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:70208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.280 [2024-07-11 02:43:07.873591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:37:32.280 [2024-07-11 02:43:07.873609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:70216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.280 [2024-07-11 02:43:07.873624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.280 [2024-07-11 02:43:07.873642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:70224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.280 [2024-07-11 02:43:07.873657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.280 [2024-07-11 02:43:07.873683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:70232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.280 [2024-07-11 02:43:07.873699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.280 [2024-07-11 02:43:07.873717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:70240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.280 [2024-07-11 02:43:07.873732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.873749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:70248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.873765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.873782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:70256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.873797] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.873814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.281 [2024-07-11 02:43:07.873830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.873847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.281 [2024-07-11 02:43:07.873862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.873879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:70264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.873895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.873912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:70272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.873928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.873945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.873961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.873978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 
lba:70288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.873993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.874010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:70296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.874025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.874043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:70304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.874058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.874075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.874095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.874112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:70320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.874128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.874145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.874160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 
[2024-07-11 02:43:07.874180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:70336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.874196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.874213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:70344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.874228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.874246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:70352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.874262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.874279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.874294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.874312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:70368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.874328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.874346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:70376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.874361] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.874378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:70384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.874394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.874411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.874426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.874444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.874459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.874477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:70408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.874492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.874519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:70416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.874537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.874554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 
lba:70424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.874569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.874587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:70432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.874603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.874620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:70440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.874635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.874653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:70448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.874669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.281 [2024-07-11 02:43:07.874686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:70456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.281 [2024-07-11 02:43:07.874702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.874719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.282 [2024-07-11 02:43:07.874734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 
[2024-07-11 02:43:07.874751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:70472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.282 [2024-07-11 02:43:07.874767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.874784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:70480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.282 [2024-07-11 02:43:07.874799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.874816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.282 [2024-07-11 02:43:07.874831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.874848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:70496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.282 [2024-07-11 02:43:07.874864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.874887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:70504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.282 [2024-07-11 02:43:07.874902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.874920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:70512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.282 [2024-07-11 02:43:07.874939] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.874957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:70520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.282 [2024-07-11 02:43:07.874972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.874989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:70528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.282 [2024-07-11 02:43:07.875005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.875022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:70536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.282 [2024-07-11 02:43:07.875037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.875054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:70544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.282 [2024-07-11 02:43:07.875070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.875087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:70552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.282 [2024-07-11 02:43:07.875102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.875119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 
lba:70560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.282 [2024-07-11 02:43:07.875134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.875151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:70568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.282 [2024-07-11 02:43:07.875166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.875183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.282 [2024-07-11 02:43:07.875199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.875216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.282 [2024-07-11 02:43:07.875231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.875249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:70592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.282 [2024-07-11 02:43:07.875264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.875282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:70600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.282 [2024-07-11 02:43:07.875297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 
[2024-07-11 02:43:07.875314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.282 [2024-07-11 02:43:07.875329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.875350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:70616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.282 [2024-07-11 02:43:07.875366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.875384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:70624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.282 [2024-07-11 02:43:07.875399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.875416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:70632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.282 [2024-07-11 02:43:07.875431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.875449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.282 [2024-07-11 02:43:07.875464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.875481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.282 [2024-07-11 02:43:07.875498] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.875521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:70672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.282 [2024-07-11 02:43:07.875538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.875562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.282 [2024-07-11 02:43:07.875577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.875594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.282 [2024-07-11 02:43:07.875610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.875627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.282 [2024-07-11 02:43:07.875643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.282 [2024-07-11 02:43:07.875660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:70704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.283 [2024-07-11 02:43:07.875675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.283 [2024-07-11 02:43:07.875693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 
lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.283 [2024-07-11 02:43:07.875708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.283 [2024-07-11 02:43:07.875725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:70720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.283 [2024-07-11 02:43:07.875741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.283 [2024-07-11 02:43:07.875758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.283 [2024-07-11 02:43:07.875773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.283 [2024-07-11 02:43:07.875796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.283 [2024-07-11 02:43:07.875812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.283 [2024-07-11 02:43:07.875830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.283 [2024-07-11 02:43:07.875845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.283 [2024-07-11 02:43:07.875863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.283 [2024-07-11 02:43:07.875878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.283 
[2024-07-11 02:43:07.875895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.283 [2024-07-11 02:43:07.875911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.283 [2024-07-11 02:43:07.875929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.283 [2024-07-11 02:43:07.875944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.283 [2024-07-11 02:43:07.875961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.283 [2024-07-11 02:43:07.875977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.283 [2024-07-11 02:43:07.875995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.283 [2024-07-11 02:43:07.876010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.283 [2024-07-11 02:43:07.876028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.283 [2024-07-11 02:43:07.876043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.283 [2024-07-11 02:43:07.876061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.283 [2024-07-11 02:43:07.876076] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.283 [2024-07-11 02:43:07.876094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.283 [2024-07-11 02:43:07.876109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.283 [2024-07-11 02:43:07.876126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.283 [2024-07-11 02:43:07.876141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.283 [2024-07-11 02:43:07.876159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.283 [2024-07-11 02:43:07.876174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.283 [2024-07-11 02:43:07.876192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.283 [2024-07-11 02:43:07.876210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.283 [2024-07-11 02:43:07.876228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.283 [2024-07-11 02:43:07.876243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.283 [2024-07-11 02:43:07.876260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 
lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:37:32.283 [2024-07-11 02:43:07.876275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 23 further WRITE commands (sqid:1, lba:70856 through lba:71032, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed ABORTED - SQ DELETION (00/08), omitted ...]
00:37:32.284 [2024-07-11 02:43:07.877070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:37:32.284 [2024-07-11 02:43:07.877088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71040 len:8 PRP1 0x0 PRP2 0x0
00:37:32.284 [2024-07-11 02:43:07.877103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:32.284 [2024-07-11 02:43:07.877122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... 15 further queued WRITE requests (sqid:1, lba:71048 through lba:71160, len:8, PRP1 0x0 PRP2 0x0), each completed manually as ABORTED - SQ DELETION (00/08) after "aborting queued i/o", omitted ...]
00:37:32.285 [2024-07-11 02:43:07.878007] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf08660 was disconnected and freed. reset controller. 
00:37:32.285 [2024-07-11 02:43:07.878030] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[... 4 admin ASYNC EVENT REQUEST (0c) commands (qid:0, cid:0 through cid:3), each completed ABORTED - SQ DELETION (00/08), omitted ...]
00:37:32.285 [2024-07-11 02:43:07.878202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
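The failover from 10.0.0.2:4420 to 10.0.0.2:4421 above implies the test registered two TCP transport paths for the same subsystem before injecting the disconnect. A hypothetical sketch of that setup via SPDK's JSON-RPC client is below; the bdev name `Nvme0` is an assumption, and the `-x` (multipath policy) flag availability depends on the SPDK version, so treat this as illustrative rather than the exact commands this job ran.

```shell
#!/bin/sh
# Hypothetical sketch: register a primary and an alternate TCP path to the
# same NVMe-oF subsystem so bdev_nvme can fail over between them.
RPC=scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# Primary path (the one the log shows being torn down first).
$RPC bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -n "$NQN" -x failover   # -x policy is an assumption

# Alternate path; on disconnect, bdev_nvme_failover_trid switches to it,
# matching "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421" above.
$RPC bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4421 -n "$NQN" -x failover
```

In-flight and queued I/O on the old queue pair are completed with ABORTED - SQ DELETION (status 00/08), which is what the repeated completion entries in this log record.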
00:37:32.285 [2024-07-11 02:43:07.882281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:32.285 [2024-07-11 02:43:07.882324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeea440 (9): Bad file descriptor
00:37:32.285 [2024-07-11 02:43:07.961274] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:37:32.285 [2024-07-11 02:43:11.499110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:42728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.285 [2024-07-11 02:43:11.499157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 31 further READ commands (sqid:1, lba:42736 through lba:42976, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ABORTED - SQ DELETION (00/08), omitted ...]
[... 17 WRITE commands (sqid:1, lba:43000 through lba:43128, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed ABORTED - SQ DELETION (00/08), omitted ...]
00:37:32.287 [2024-07-11 02:43:11.500809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.287 [2024-07-11 02:43:11.500824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:37:32.287 [2024-07-11 02:43:11.500841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.287 [2024-07-11 02:43:11.500857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.287 [2024-07-11 02:43:11.500874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.287 [2024-07-11 02:43:11.500889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.287 [2024-07-11 02:43:11.500906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.287 [2024-07-11 02:43:11.500921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.287 [2024-07-11 02:43:11.500938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.287 [2024-07-11 02:43:11.500953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.287 [2024-07-11 02:43:11.500971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.287 [2024-07-11 02:43:11.500986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.287 [2024-07-11 02:43:11.501004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.287 [2024-07-11 02:43:11.501020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.287 [2024-07-11 02:43:11.501045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.287 [2024-07-11 02:43:11.501061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.287 [2024-07-11 02:43:11.501078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.287 [2024-07-11 02:43:11.501093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.287 [2024-07-11 02:43:11.501110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.287 [2024-07-11 02:43:11.501126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.287 [2024-07-11 02:43:11.501143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.287 [2024-07-11 02:43:11.501158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.287 [2024-07-11 02:43:11.501176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.287 [2024-07-11 02:43:11.501191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.288 [2024-07-11 02:43:11.501208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:43232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.288 [2024-07-11 02:43:11.501223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.288 [2024-07-11 02:43:11.501240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.288 [2024-07-11 02:43:11.501256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.288 [2024-07-11 02:43:11.501273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.288 [2024-07-11 02:43:11.501288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.288 [2024-07-11 02:43:11.501305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:43256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.288 [2024-07-11 02:43:11.501320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.288 [2024-07-11 02:43:11.501338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.288 [2024-07-11 02:43:11.501353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.288 [2024-07-11 02:43:11.501371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.288 [2024-07-11 02:43:11.501386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.288 
[2024-07-11 02:43:11.501403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.288 [2024-07-11 02:43:11.501418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.288 [2024-07-11 02:43:11.501436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.288 [2024-07-11 02:43:11.501455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.288 [2024-07-11 02:43:11.501473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:43296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.288 [2024-07-11 02:43:11.501488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.288 [2024-07-11 02:43:11.501505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:43304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.288 [2024-07-11 02:43:11.501531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.288 [2024-07-11 02:43:11.501549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.288 [2024-07-11 02:43:11.501565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.288 [2024-07-11 02:43:11.501583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:43320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.288 [2024-07-11 02:43:11.501598] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.288 [2024-07-11 02:43:11.501615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:43328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.288 [2024-07-11 02:43:11.501630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.288 [2024-07-11 02:43:11.501648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.288 [2024-07-11 02:43:11.501663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.288 [2024-07-11 02:43:11.501681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.288 [2024-07-11 02:43:11.501696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.288 [2024-07-11 02:43:11.501713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.288 [2024-07-11 02:43:11.501728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.288 [2024-07-11 02:43:11.501745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:43360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.288 [2024-07-11 02:43:11.501761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.288 [2024-07-11 02:43:11.501778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 
lba:43368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.288 [2024-07-11 02:43:11.501793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.288 [2024-07-11 02:43:11.501810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:43376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.288 [2024-07-11 02:43:11.501825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.288 [2024-07-11 02:43:11.501863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.288 [2024-07-11 02:43:11.501881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43384 len:8 PRP1 0x0 PRP2 0x0 00:37:32.288 [2024-07-11 02:43:11.501900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.288 [2024-07-11 02:43:11.501919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.288 [2024-07-11 02:43:11.501933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.288 [2024-07-11 02:43:11.501946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43392 len:8 PRP1 0x0 PRP2 0x0 00:37:32.288 [2024-07-11 02:43:11.501960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.288 [2024-07-11 02:43:11.501975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.288 [2024-07-11 02:43:11.501988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.288 [2024-07-11 02:43:11.502001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:43400 len:8 PRP1 0x0 PRP2 0x0 00:37:32.288 [2024-07-11 02:43:11.502015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.288 [2024-07-11 02:43:11.502030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.288 [2024-07-11 02:43:11.502042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.288 [2024-07-11 02:43:11.502056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43408 len:8 PRP1 0x0 PRP2 0x0 00:37:32.288 [2024-07-11 02:43:11.502071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.288 [2024-07-11 02:43:11.502086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.288 [2024-07-11 02:43:11.502098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.288 [2024-07-11 02:43:11.502111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43416 len:8 PRP1 0x0 PRP2 0x0 00:37:32.288 [2024-07-11 02:43:11.502125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.288 [2024-07-11 02:43:11.502141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.288 [2024-07-11 02:43:11.502153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.288 [2024-07-11 02:43:11.502167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43424 len:8 PRP1 0x0 PRP2 0x0 00:37:32.288 [2024-07-11 02:43:11.502181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.288 [2024-07-11 02:43:11.502196] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.289 [2024-07-11 02:43:11.502209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.289 [2024-07-11 02:43:11.502222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43432 len:8 PRP1 0x0 PRP2 0x0 00:37:32.289 [2024-07-11 02:43:11.502236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.289 [2024-07-11 02:43:11.502251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.289 [2024-07-11 02:43:11.502263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.289 [2024-07-11 02:43:11.502276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43440 len:8 PRP1 0x0 PRP2 0x0 00:37:32.289 [2024-07-11 02:43:11.502291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.289 [2024-07-11 02:43:11.502306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.289 [2024-07-11 02:43:11.502318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.289 [2024-07-11 02:43:11.502333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43448 len:8 PRP1 0x0 PRP2 0x0 00:37:32.289 [2024-07-11 02:43:11.502348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.289 [2024-07-11 02:43:11.502364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.289 [2024-07-11 02:43:11.502376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.289 [2024-07-11 
02:43:11.502389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43456 len:8 PRP1 0x0 PRP2 0x0 00:37:32.289 [2024-07-11 02:43:11.502403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.289 [2024-07-11 02:43:11.502418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.289 [2024-07-11 02:43:11.502431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.289 [2024-07-11 02:43:11.502444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43464 len:8 PRP1 0x0 PRP2 0x0 00:37:32.289 [2024-07-11 02:43:11.502458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.289 [2024-07-11 02:43:11.502473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.289 [2024-07-11 02:43:11.502485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.289 [2024-07-11 02:43:11.502498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43472 len:8 PRP1 0x0 PRP2 0x0 00:37:32.289 [2024-07-11 02:43:11.502519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.289 [2024-07-11 02:43:11.502535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.289 [2024-07-11 02:43:11.502548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.289 [2024-07-11 02:43:11.502560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43480 len:8 PRP1 0x0 PRP2 0x0 00:37:32.289 [2024-07-11 02:43:11.502574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.289 [2024-07-11 02:43:11.502589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.289 [2024-07-11 02:43:11.502602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.289 [2024-07-11 02:43:11.502615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43488 len:8 PRP1 0x0 PRP2 0x0 00:37:32.289 [2024-07-11 02:43:11.502629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.289 [2024-07-11 02:43:11.502644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.289 [2024-07-11 02:43:11.502656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.289 [2024-07-11 02:43:11.502669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43496 len:8 PRP1 0x0 PRP2 0x0 00:37:32.289 [2024-07-11 02:43:11.502683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.289 [2024-07-11 02:43:11.502698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.289 [2024-07-11 02:43:11.502710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.289 [2024-07-11 02:43:11.502723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43504 len:8 PRP1 0x0 PRP2 0x0 00:37:32.289 [2024-07-11 02:43:11.502737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.289 [2024-07-11 02:43:11.502752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.289 [2024-07-11 02:43:11.502767] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.289 [2024-07-11 02:43:11.502780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43512 len:8 PRP1 0x0 PRP2 0x0 00:37:32.289 [2024-07-11 02:43:11.502795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.289 [2024-07-11 02:43:11.502810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.289 [2024-07-11 02:43:11.502822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.289 [2024-07-11 02:43:11.502835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43520 len:8 PRP1 0x0 PRP2 0x0 00:37:32.289 [2024-07-11 02:43:11.502850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.289 [2024-07-11 02:43:11.502865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.289 [2024-07-11 02:43:11.502877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.289 [2024-07-11 02:43:11.502890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43528 len:8 PRP1 0x0 PRP2 0x0 00:37:32.289 [2024-07-11 02:43:11.502904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.289 [2024-07-11 02:43:11.502919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.289 [2024-07-11 02:43:11.502931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.289 [2024-07-11 02:43:11.502944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43536 len:8 PRP1 0x0 PRP2 0x0 00:37:32.289 
[2024-07-11 02:43:11.502958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.289 [2024-07-11 02:43:11.502973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.289 [2024-07-11 02:43:11.502985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.289 [2024-07-11 02:43:11.502998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43544 len:8 PRP1 0x0 PRP2 0x0 00:37:32.289 [2024-07-11 02:43:11.503012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.289 [2024-07-11 02:43:11.503027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.289 [2024-07-11 02:43:11.503040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.289 [2024-07-11 02:43:11.503053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43552 len:8 PRP1 0x0 PRP2 0x0 00:37:32.289 [2024-07-11 02:43:11.503067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.289 [2024-07-11 02:43:11.503082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.289 [2024-07-11 02:43:11.503095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.289 [2024-07-11 02:43:11.503107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43560 len:8 PRP1 0x0 PRP2 0x0 00:37:32.289 [2024-07-11 02:43:11.503122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.289 [2024-07-11 02:43:11.503136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:37:32.289 [2024-07-11 02:43:11.503149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.289 [2024-07-11 02:43:11.503162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43568 len:8 PRP1 0x0 PRP2 0x0 00:37:32.289 [2024-07-11 02:43:11.503176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.290 [2024-07-11 02:43:11.503195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.290 [2024-07-11 02:43:11.503207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.290 [2024-07-11 02:43:11.503220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43576 len:8 PRP1 0x0 PRP2 0x0 00:37:32.290 [2024-07-11 02:43:11.503234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.290 [2024-07-11 02:43:11.503249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.290 [2024-07-11 02:43:11.503261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.290 [2024-07-11 02:43:11.503274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43584 len:8 PRP1 0x0 PRP2 0x0 00:37:32.290 [2024-07-11 02:43:11.503289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.290 [2024-07-11 02:43:11.503303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.290 [2024-07-11 02:43:11.503316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.290 [2024-07-11 02:43:11.503328] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43592 len:8 PRP1 0x0 PRP2 0x0 00:37:32.290 [2024-07-11 02:43:11.503343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.290 [2024-07-11 02:43:11.503357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.290 [2024-07-11 02:43:11.503370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.290 [2024-07-11 02:43:11.503383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43600 len:8 PRP1 0x0 PRP2 0x0 00:37:32.290 [2024-07-11 02:43:11.503397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.290 [2024-07-11 02:43:11.503412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.290 [2024-07-11 02:43:11.503424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.290 [2024-07-11 02:43:11.503437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43608 len:8 PRP1 0x0 PRP2 0x0 00:37:32.290 [2024-07-11 02:43:11.503451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.290 [2024-07-11 02:43:11.503466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.290 [2024-07-11 02:43:11.503479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.290 [2024-07-11 02:43:11.503492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43616 len:8 PRP1 0x0 PRP2 0x0 00:37:32.290 [2024-07-11 02:43:11.503506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:37:32.290 [2024-07-11 02:43:11.503529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.290 [2024-07-11 02:43:11.503542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.290 [2024-07-11 02:43:11.503555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43624 len:8 PRP1 0x0 PRP2 0x0 00:37:32.290 [2024-07-11 02:43:11.503569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.290 [2024-07-11 02:43:11.503584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.290 [2024-07-11 02:43:11.503596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.290 [2024-07-11 02:43:11.503609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43632 len:8 PRP1 0x0 PRP2 0x0 00:37:32.290 [2024-07-11 02:43:11.503627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.290 [2024-07-11 02:43:11.503642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.290 [2024-07-11 02:43:11.503655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.290 [2024-07-11 02:43:11.503668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43640 len:8 PRP1 0x0 PRP2 0x0 00:37:32.290 [2024-07-11 02:43:11.503682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.290 [2024-07-11 02:43:11.503697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.290 [2024-07-11 02:43:11.503709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:37:32.290 [2024-07-11 02:43:11.503721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43648 len:8 PRP1 0x0 PRP2 0x0 00:37:32.290 [2024-07-11 02:43:11.503736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.290 [2024-07-11 02:43:11.503750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.290 [2024-07-11 02:43:11.503763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.290 [2024-07-11 02:43:11.503775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43656 len:8 PRP1 0x0 PRP2 0x0 00:37:32.290 [2024-07-11 02:43:11.503789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.290 [2024-07-11 02:43:11.503804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.290 [2024-07-11 02:43:11.503817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.290 [2024-07-11 02:43:11.503829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43664 len:8 PRP1 0x0 PRP2 0x0 00:37:32.290 [2024-07-11 02:43:11.503843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.290 [2024-07-11 02:43:11.503858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.290 [2024-07-11 02:43:11.503870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.290 [2024-07-11 02:43:11.503883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43672 len:8 PRP1 0x0 PRP2 0x0 00:37:32.290 [2024-07-11 02:43:11.503897] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.290 [2024-07-11 02:43:11.503912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.290 [2024-07-11 02:43:11.503924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.290 [2024-07-11 02:43:11.503944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43680 len:8 PRP1 0x0 PRP2 0x0 00:37:32.290 [2024-07-11 02:43:11.503959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.290 [2024-07-11 02:43:11.503974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.290 [2024-07-11 02:43:11.503986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.290 [2024-07-11 02:43:11.503999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43688 len:8 PRP1 0x0 PRP2 0x0 00:37:32.290 [2024-07-11 02:43:11.504014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.290 [2024-07-11 02:43:11.504029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.290 [2024-07-11 02:43:11.504041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.290 [2024-07-11 02:43:11.504057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43696 len:8 PRP1 0x0 PRP2 0x0 00:37:32.290 [2024-07-11 02:43:11.504071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.290 [2024-07-11 02:43:11.504086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.290 
[2024-07-11 02:43:11.504099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.290 [2024-07-11 02:43:11.504111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43704 len:8 PRP1 0x0 PRP2 0x0 00:37:32.290 [2024-07-11 02:43:11.504126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.290 [2024-07-11 02:43:11.504140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.290 [2024-07-11 02:43:11.504153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.290 [2024-07-11 02:43:11.504165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43712 len:8 PRP1 0x0 PRP2 0x0 00:37:32.290 [2024-07-11 02:43:11.504180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.290 [2024-07-11 02:43:11.504195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.290 [2024-07-11 02:43:11.504207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.290 [2024-07-11 02:43:11.504219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43720 len:8 PRP1 0x0 PRP2 0x0 00:37:32.290 [2024-07-11 02:43:11.504234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.290 [2024-07-11 02:43:11.504249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.290 [2024-07-11 02:43:11.504261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.290 [2024-07-11 02:43:11.504274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:43728 len:8 PRP1 0x0 PRP2 0x0 00:37:32.290 [2024-07-11 02:43:11.504288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.290 [2024-07-11 02:43:11.504303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.290 [2024-07-11 02:43:11.504315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.290 [2024-07-11 02:43:11.504328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43736 len:8 PRP1 0x0 PRP2 0x0 00:37:32.290 [2024-07-11 02:43:11.504342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.290 [2024-07-11 02:43:11.504356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.290 [2024-07-11 02:43:11.504369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.290 [2024-07-11 02:43:11.504388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43744 len:8 PRP1 0x0 PRP2 0x0 00:37:32.290 [2024-07-11 02:43:11.504402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:11.504417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.291 [2024-07-11 02:43:11.504429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.291 [2024-07-11 02:43:11.504442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42984 len:8 PRP1 0x0 PRP2 0x0 00:37:32.291 [2024-07-11 02:43:11.504456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:11.504475] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.291 [2024-07-11 02:43:11.504487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.291 [2024-07-11 02:43:11.504500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42992 len:8 PRP1 0x0 PRP2 0x0 00:37:32.291 [2024-07-11 02:43:11.504520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:11.504579] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf0a550 was disconnected and freed. reset controller. 00:37:32.291 [2024-07-11 02:43:11.504602] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:37:32.291 [2024-07-11 02:43:11.504642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:32.291 [2024-07-11 02:43:11.504661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:11.504678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:32.291 [2024-07-11 02:43:11.504693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:11.504708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:32.291 [2024-07-11 02:43:11.504723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:11.504738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:32.291 [2024-07-11 02:43:11.504753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:11.504767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:32.291 [2024-07-11 02:43:11.504827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeea440 (9): Bad file descriptor 00:37:32.291 [2024-07-11 02:43:11.508881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:32.291 [2024-07-11 02:43:11.670404] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:37:32.291 [2024-07-11 02:43:16.059315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.291 [2024-07-11 02:43:16.059378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:16.059408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.291 [2024-07-11 02:43:16.059426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:16.059445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.291 [2024-07-11 02:43:16.059461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:16.059479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84464 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.291 [2024-07-11 02:43:16.059495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:16.059519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.291 [2024-07-11 02:43:16.059537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:16.059565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.291 [2024-07-11 02:43:16.059581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:16.059599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.291 [2024-07-11 02:43:16.059614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:16.059632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.291 [2024-07-11 02:43:16.059648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:16.059666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.291 [2024-07-11 02:43:16.059682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 
02:43:16.059699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.291 [2024-07-11 02:43:16.059715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:16.059733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.291 [2024-07-11 02:43:16.059748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:16.059766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.291 [2024-07-11 02:43:16.059781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:16.059799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.291 [2024-07-11 02:43:16.059814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:16.059831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.291 [2024-07-11 02:43:16.059847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:16.059864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.291 [2024-07-11 02:43:16.059880] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:16.059897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.291 [2024-07-11 02:43:16.059913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:16.059930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.291 [2024-07-11 02:43:16.059946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:16.059963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.291 [2024-07-11 02:43:16.059983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:16.060001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.291 [2024-07-11 02:43:16.060017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:16.060034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.291 [2024-07-11 02:43:16.060050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:16.060067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:37:32.291 [2024-07-11 02:43:16.060083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:16.060102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.291 [2024-07-11 02:43:16.060118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:16.060135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.291 [2024-07-11 02:43:16.060151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:16.060169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.291 [2024-07-11 02:43:16.060184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:16.060202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.291 [2024-07-11 02:43:16.060217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:16.060235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.291 [2024-07-11 02:43:16.060251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:16.060269] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.291 [2024-07-11 02:43:16.060285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.291 [2024-07-11 02:43:16.060302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.292 [2024-07-11 02:43:16.060317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.060335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.292 [2024-07-11 02:43:16.060350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.060368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.292 [2024-07-11 02:43:16.060383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.060404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.292 [2024-07-11 02:43:16.060420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.060437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.292 [2024-07-11 02:43:16.060453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.060470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.292 [2024-07-11 02:43:16.060485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.060502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.292 [2024-07-11 02:43:16.060525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.060544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.292 [2024-07-11 02:43:16.060559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.060577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.292 [2024-07-11 02:43:16.060592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.060609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.292 [2024-07-11 02:43:16.060625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.060642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.292 
[2024-07-11 02:43:16.060658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.060676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.292 [2024-07-11 02:43:16.060692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.060710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.292 [2024-07-11 02:43:16.060726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.060743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.292 [2024-07-11 02:43:16.060759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.060776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.292 [2024-07-11 02:43:16.060792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.060810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-07-11 02:43:16.060828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.060846] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-07-11 02:43:16.060862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.060880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-07-11 02:43:16.060896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.060914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-07-11 02:43:16.060929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.060946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-07-11 02:43:16.060962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.060979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-07-11 02:43:16.060995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.061012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-07-11 02:43:16.061026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.061044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-07-11 02:43:16.061059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.061077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-07-11 02:43:16.061092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.061109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-07-11 02:43:16.061124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.061142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-07-11 02:43:16.061157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.061175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-07-11 02:43:16.061190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.061208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-07-11 02:43:16.061223] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.061240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-07-11 02:43:16.061259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.061277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-07-11 02:43:16.061292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.061309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-07-11 02:43:16.061324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.061341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-07-11 02:43:16.061356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.061374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-07-11 02:43:16.061389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.061407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-07-11 02:43:16.061422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.061439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-07-11 02:43:16.061454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.061471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-07-11 02:43:16.061486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.061504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-07-11 02:43:16.061525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.061543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-07-11 02:43:16.061559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.292 [2024-07-11 02:43:16.061576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-07-11 02:43:16.061591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:37:32.292 [2024-07-11 02:43:16.061609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-07-11 02:43:16.061624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.061641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.293 [2024-07-11 02:43:16.061657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.061678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.293 [2024-07-11 02:43:16.061694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.061712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.293 [2024-07-11 02:43:16.061727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.061745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.293 [2024-07-11 02:43:16.061760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.061778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.293 [2024-07-11 02:43:16.061793] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.061810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.293 [2024-07-11 02:43:16.061825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.061842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.293 [2024-07-11 02:43:16.061857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.061875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:84256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.293 [2024-07-11 02:43:16.061890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.061907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.061923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.061941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.061956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.061973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 
lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.061988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 
02:43:16.062175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062354] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84928 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062743] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:85000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:85008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:32.293 [2024-07-11 02:43:16.062896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.293 [2024-07-11 02:43:16.062929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.293 [2024-07-11 02:43:16.062966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.062983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.293 [2024-07-11 02:43:16.062998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.063016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.293 [2024-07-11 02:43:16.063031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.293 [2024-07-11 02:43:16.063048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.294 [2024-07-11 02:43:16.063063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.294 [2024-07-11 02:43:16.063080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.294 [2024-07-11 02:43:16.063095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.294 [2024-07-11 02:43:16.063113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:32.294 [2024-07-11 02:43:16.063128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.294 [2024-07-11 02:43:16.063145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.294 [2024-07-11 02:43:16.063160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.294 [2024-07-11 02:43:16.063177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:84328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.294 [2024-07-11 02:43:16.063192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.294 [2024-07-11 02:43:16.063210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.294 [2024-07-11 02:43:16.063225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.294 [2024-07-11 02:43:16.063242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.294 [2024-07-11 02:43:16.063257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.294 [2024-07-11 02:43:16.063274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.294 [2024-07-11 02:43:16.063289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.294 [2024-07-11 02:43:16.063307] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:84360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.294 [2024-07-11 02:43:16.063322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.294 [2024-07-11 02:43:16.063339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.294 [2024-07-11 02:43:16.063358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.294 [2024-07-11 02:43:16.063376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.294 [2024-07-11 02:43:16.063391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.294 [2024-07-11 02:43:16.063408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.294 [2024-07-11 02:43:16.063424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.294 [2024-07-11 02:43:16.063441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:84392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.294 [2024-07-11 02:43:16.063456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.294 [2024-07-11 02:43:16.063473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:84400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.294 [2024-07-11 02:43:16.063488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.294 [2024-07-11 02:43:16.063505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.294 [2024-07-11 02:43:16.063527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.294 [2024-07-11 02:43:16.063544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.294 [2024-07-11 02:43:16.063559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.294 [2024-07-11 02:43:16.063577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:84424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.294 [2024-07-11 02:43:16.063592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.294 [2024-07-11 02:43:16.063609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.294 [2024-07-11 02:43:16.063624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.294 [2024-07-11 02:43:16.063659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:32.294 [2024-07-11 02:43:16.063675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:32.294 [2024-07-11 02:43:16.063688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84440 len:8 PRP1 0x0 PRP2 0x0 00:37:32.294 [2024-07-11 02:43:16.063703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.294 [2024-07-11 02:43:16.063761] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf0e3b0 was disconnected and freed. reset controller. 00:37:32.294 [2024-07-11 02:43:16.063784] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:37:32.294 [2024-07-11 02:43:16.063821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:32.294 [2024-07-11 02:43:16.063840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.294 [2024-07-11 02:43:16.063857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:32.294 [2024-07-11 02:43:16.063876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.294 [2024-07-11 02:43:16.063892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:32.294 [2024-07-11 02:43:16.063906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.294 [2024-07-11 02:43:16.063923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:32.294 [2024-07-11 02:43:16.063937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:32.294 [2024-07-11 02:43:16.063952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:37:32.294 [2024-07-11 02:43:16.068013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:32.294 [2024-07-11 02:43:16.068055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeea440 (9): Bad file descriptor 00:37:32.294 [2024-07-11 02:43:16.102803] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:37:32.294 00:37:32.294 Latency(us) 00:37:32.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:32.294 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:32.294 Verification LBA range: start 0x0 length 0x4000 00:37:32.294 NVMe0n1 : 15.01 7346.70 28.70 570.61 0.00 16133.48 649.29 17767.54 00:37:32.294 =================================================================================================================== 00:37:32.294 Total : 7346.70 28.70 570.61 0.00 16133.48 649.29 17767.54 00:37:32.294 Received shutdown signal, test time was about 15.000000 seconds 00:37:32.294 00:37:32.294 Latency(us) 00:37:32.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:32.294 =================================================================================================================== 00:37:32.294 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:32.294 02:43:21 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:37:32.294 02:43:21 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:37:32.294 02:43:21 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:37:32.294 02:43:21 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1949014 00:37:32.294 02:43:21 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:37:32.294 02:43:21 
nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1949014 /var/tmp/bdevperf.sock 00:37:32.294 02:43:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1949014 ']' 00:37:32.294 02:43:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:32.294 02:43:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:32.294 02:43:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:32.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:32.294 02:43:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:32.294 02:43:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:37:32.294 02:43:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:32.294 02:43:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:37:32.294 02:43:22 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:37:32.294 [2024-07-11 02:43:22.537656] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:37:32.294 02:43:22 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:37:32.552 [2024-07-11 02:43:22.834662] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:37:32.552 02:43:22 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t 
tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:33.118 NVMe0n1 00:37:33.118 02:43:23 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:33.377 00:37:33.377 02:43:23 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:33.941 00:37:33.941 02:43:24 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:37:33.941 02:43:24 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:37:34.199 02:43:24 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:34.764 02:43:24 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:37:38.045 02:43:27 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:37:38.045 02:43:27 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:37:38.045 02:43:28 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1949620 00:37:38.045 02:43:28 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:38.045 02:43:28 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1949620 00:37:38.982 0 00:37:38.982 02:43:29 nvmf_tcp.nvmf_failover -- host/failover.sh@94 
-- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:37:38.982 [2024-07-11 02:43:22.006378] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:37:38.982 [2024-07-11 02:43:22.006487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1949014 ] 00:37:38.982 EAL: No free 2048 kB hugepages reported on node 1 00:37:38.982 [2024-07-11 02:43:22.066537] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:38.982 [2024-07-11 02:43:22.153145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:38.982 [2024-07-11 02:43:24.872021] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:37:38.982 [2024-07-11 02:43:24.872105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:38.982 [2024-07-11 02:43:24.872128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.982 [2024-07-11 02:43:24.872148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:38.982 [2024-07-11 02:43:24.872163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.982 [2024-07-11 02:43:24.872178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:38.982 [2024-07-11 02:43:24.872193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.982 [2024-07-11 
02:43:24.872244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:38.982 [2024-07-11 02:43:24.872261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.982 [2024-07-11 02:43:24.872276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:38.982 [2024-07-11 02:43:24.872327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:38.982 [2024-07-11 02:43:24.872361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96a440 (9): Bad file descriptor 00:37:38.982 [2024-07-11 02:43:24.918228] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:37:38.982 Running I/O for 1 seconds... 00:37:38.982 00:37:38.982 Latency(us) 00:37:38.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:38.982 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:38.982 Verification LBA range: start 0x0 length 0x4000 00:37:38.982 NVMe0n1 : 1.01 7634.73 29.82 0.00 0.00 16688.46 3640.89 15437.37 00:37:38.982 =================================================================================================================== 00:37:38.982 Total : 7634.73 29.82 0.00 0.00 16688.46 3640.89 15437.37 00:37:38.982 02:43:29 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:37:38.982 02:43:29 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:37:39.240 02:43:29 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 00:37:39.806 02:43:29 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:37:39.806 02:43:29 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:37:40.064 02:43:30 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:40.322 02:43:30 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:37:43.602 02:43:33 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:37:43.602 02:43:33 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:37:43.602 02:43:33 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1949014 00:37:43.602 02:43:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1949014 ']' 00:37:43.602 02:43:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1949014 00:37:43.602 02:43:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:37:43.602 02:43:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:43.602 02:43:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1949014 00:37:43.602 02:43:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:43.602 02:43:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:43.602 02:43:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1949014' 00:37:43.602 killing process with pid 1949014 00:37:43.602 02:43:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # 
kill 1949014 00:37:43.602 02:43:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1949014 00:37:43.859 02:43:34 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:37:43.859 02:43:34 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:44.117 02:43:34 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:37:44.117 02:43:34 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:37:44.117 02:43:34 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:37:44.117 02:43:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:44.117 02:43:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:37:44.117 02:43:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:44.117 02:43:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:37:44.117 02:43:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:44.117 02:43:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:44.117 rmmod nvme_tcp 00:37:44.117 rmmod nvme_fabrics 00:37:44.117 rmmod nvme_keyring 00:37:44.117 02:43:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:44.117 02:43:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:37:44.117 02:43:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:37:44.117 02:43:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1947299 ']' 00:37:44.118 02:43:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1947299 00:37:44.118 02:43:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1947299 ']' 00:37:44.118 02:43:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1947299 00:37:44.118 02:43:34 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:37:44.118 02:43:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:44.118 02:43:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1947299 00:37:44.118 02:43:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:44.118 02:43:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:44.118 02:43:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1947299' 00:37:44.118 killing process with pid 1947299 00:37:44.118 02:43:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1947299 00:37:44.118 02:43:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1947299 00:37:44.377 02:43:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:44.377 02:43:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:44.377 02:43:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:44.377 02:43:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:44.377 02:43:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:44.377 02:43:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:44.377 02:43:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:44.377 02:43:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:46.284 02:43:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:46.284 00:37:46.284 real 0m35.365s 00:37:46.284 user 2m6.040s 00:37:46.284 sys 0m6.001s 00:37:46.284 02:43:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:46.284 02:43:36 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:37:46.284 ************************************ 00:37:46.284 END TEST nvmf_failover 00:37:46.284 ************************************ 00:37:46.284 02:43:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:37:46.284 02:43:36 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:37:46.284 02:43:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:46.284 02:43:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:46.284 02:43:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:46.284 ************************************ 00:37:46.284 START TEST nvmf_host_discovery 00:37:46.284 ************************************ 00:37:46.285 02:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:37:46.544 * Looking for test storage... 
00:37:46.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:46.544 02:43:36 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@285 -- # xtrace_disable 00:37:46.544 02:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:37:47.922 Found 0000:08:00.0 (0x8086 - 0x159b) 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:47.922 02:43:38 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:37:47.922 Found 0000:08:00.1 (0x8086 - 0x159b) 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:47.922 
02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:37:47.922 Found net devices under 0000:08:00.0: cvl_0_0 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:37:47.922 Found net devices under 0000:08:00.1: cvl_0_1 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:47.922 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:48.182 02:43:38 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:48.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:48.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:37:48.182 00:37:48.182 --- 10.0.0.2 ping statistics --- 00:37:48.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:48.182 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:48.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:48.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:37:48.182 00:37:48.182 --- 10.0.0.1 ping statistics --- 00:37:48.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:48.182 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # 
nvmfappstart -m 0x2 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1951618 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1951618 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1951618 ']' 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:48.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:48.182 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:48.182 [2024-07-11 02:43:38.519537] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:37:48.182 [2024-07-11 02:43:38.519629] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:48.182 EAL: No free 2048 kB hugepages reported on node 1 00:37:48.182 [2024-07-11 02:43:38.583897] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:48.440 [2024-07-11 02:43:38.670447] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:48.440 [2024-07-11 02:43:38.670517] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:48.440 [2024-07-11 02:43:38.670536] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:48.441 [2024-07-11 02:43:38.670550] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:48.441 [2024-07-11 02:43:38.670562] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:48.441 [2024-07-11 02:43:38.670590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:48.441 [2024-07-11 02:43:38.800790] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:48.441 [2024-07-11 02:43:38.808930] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- 
# rpc_cmd bdev_null_create null0 1000 512
00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:48.441 null0
00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:48.441 null1
00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1951642
00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1951642 /tmp/host.sock
00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1951642 ']'
00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock
00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100
00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:37:48.441 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable
00:37:48.441 02:43:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:48.699 [2024-07-11 02:43:38.887425] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:37:48.699 [2024-07-11 02:43:38.887534] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1951642 ]
00:37:48.699 EAL: No free 2048 kB hugepages reported on node 1
00:37:48.699 [2024-07-11 02:43:38.948270] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:48.699 [2024-07-11 02:43:39.035708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:37:48.957 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:37:48.957 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0
00:37:48.957 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:37:48.957 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:37:48.958 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:49.217 [2024-07-11 02:43:39.426617] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count ))
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]]
00:37:49.217 02:43:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1
00:37:49.783 [2024-07-11 02:43:40.199638] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:37:49.783 [2024-07-11 02:43:40.199687] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:37:49.783 [2024-07-11 02:43:40.199712] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:37:50.040 [2024-07-11 02:43:40.286979] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:37:50.040 [2024-07-11 02:43:40.349461] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
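The xtrace above repeatedly runs the test harness's `waitforcondition` helper: it `eval`s a caller-supplied condition string up to `max=10` times, sleeping between attempts, until the RPC output matches the expected state. A minimal standalone sketch of that polling pattern, assuming only what the trace shows (the retry count and the `eval`/`sleep` structure; the real helper in common/autotest_common.sh may differ in detail):

```shell
#!/usr/bin/env bash
# Sketch of the waitforcondition polling pattern seen in the trace:
# evaluate a condition string up to $max times, sleeping between
# attempts, and succeed as soon as the condition holds.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        # eval lets callers pass conditions like
        # '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1
}

# Hypothetical usage: the condition becomes true once a marker
# file appears, mimicking a subsystem showing up asynchronously.
marker=$(mktemp -u)
( sleep 1; touch "$marker" ) &
waitforcondition "[[ -e $marker ]]" && echo "condition met"
rm -f "$marker"
```

This is why the log shows the same `bdev_nvme_get_controllers`/`bdev_get_bdevs` RPC sequence repeated: each retry re-runs the condition, and a failed comparison (e.g. `[[ '' == \n\v\m\e\0 ]]`) is followed by `sleep 1` and another attempt.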
00:37:50.040 [2024-07-11 02:43:40.349487] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:37:50.298 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]]
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count ))
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:50.557 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:50.816 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:37:50.816 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:37:50.816 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count ))
00:37:50.816 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:37:50.816 02:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:37:50.816 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:50.816 02:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:50.816 [2024-07-11 02:43:40.999146] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:37:50.816 [2024-07-11 02:43:41.000114] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:37:50.816 [2024-07-11 02:43:41.000156] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:50.816 [2024-07-11 02:43:41.085758] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:37:50.816 02:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1
00:37:50.817 [2024-07-11 02:43:41.144243] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:37:50.817 [2024-07-11 02:43:41.144272] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:37:50.817 [2024-07-11 02:43:41.144285] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:37:51.750 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:37:51.750 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:37:51.750 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0
00:37:51.750 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:37:51.750 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:37:51.750 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:51.750 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:51.750 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:37:51.750 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:37:51.750 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count ))
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:52.011 [2024-07-11 02:43:42.239008] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:37:52.011 [2024-07-11 02:43:42.239054] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:37:52.011 [2024-07-11 02:43:42.248096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:37:52.011 [2024-07-11 02:43:42.248132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:52.011 [2024-07-11 02:43:42.248152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:37:52.011 [2024-07-11 02:43:42.248168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:52.011 [2024-07-11 02:43:42.248185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:37:52.011 [2024-07-11
02:43:42.248200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.011 [2024-07-11 02:43:42.248217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:52.011 [2024-07-11 02:43:42.248232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:52.011 [2024-07-11 02:43:42.248247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca8140 is same with the state(5) to be set 00:37:52.011 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:52.011 [2024-07-11 02:43:42.258121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca8140 (9): Bad file descriptor 00:37:52.011 [2024-07-11 02:43:42.268157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:52.011 [2024-07-11 02:43:42.268341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.011 [2024-07-11 02:43:42.268371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca8140 with addr=10.0.0.2, port=4420 00:37:52.011 [2024-07-11 02:43:42.268390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca8140 is same with the state(5) to be set 00:37:52.011 [2024-07-11 02:43:42.268416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca8140 (9): Bad file descriptor 00:37:52.011 [2024-07-11 02:43:42.268439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:52.011 [2024-07-11 02:43:42.268454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 
00:37:52.011 [2024-07-11 02:43:42.268479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:52.012 [2024-07-11 02:43:42.268502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.012 [2024-07-11 02:43:42.278247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:52.012 [2024-07-11 02:43:42.278403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.012 [2024-07-11 02:43:42.278432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca8140 with addr=10.0.0.2, port=4420 00:37:52.012 [2024-07-11 02:43:42.278450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca8140 is same with the state(5) to be set 00:37:52.012 [2024-07-11 02:43:42.278474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca8140 (9): Bad file descriptor 00:37:52.012 [2024-07-11 02:43:42.278496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:52.012 [2024-07-11 02:43:42.278519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:52.012 [2024-07-11 02:43:42.278537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:52.012 [2024-07-11 02:43:42.278565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:37:52.012 [2024-07-11 02:43:42.288331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:52.012 [2024-07-11 02:43:42.288506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.012 [2024-07-11 02:43:42.288542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca8140 with addr=10.0.0.2, port=4420 00:37:52.012 [2024-07-11 02:43:42.288560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca8140 is same with the state(5) to be set 00:37:52.012 [2024-07-11 02:43:42.288585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca8140 (9): Bad file descriptor 00:37:52.012 [2024-07-11 02:43:42.288608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:52.012 [2024-07-11 02:43:42.288623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:52.012 [2024-07-11 02:43:42.288639] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:52.012 [2024-07-11 02:43:42.288660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:52.012 [2024-07-11 02:43:42.298411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:52.012 [2024-07-11 02:43:42.298580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.012 [2024-07-11 02:43:42.298610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca8140 with addr=10.0.0.2, port=4420 00:37:52.012 [2024-07-11 02:43:42.298628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca8140 is same with the state(5) to be set 00:37:52.012 [2024-07-11 02:43:42.298653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca8140 (9): Bad file descriptor 00:37:52.012 [2024-07-11 02:43:42.298675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:52.012 [2024-07-11 02:43:42.298691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:52.012 
[2024-07-11 02:43:42.298707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:52.012 [2024-07-11 02:43:42.298729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.012 [2024-07-11 02:43:42.308488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:52.012 [2024-07-11 02:43:42.308632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.012 [2024-07-11 02:43:42.308661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca8140 with addr=10.0.0.2, port=4420 00:37:52.012 [2024-07-11 02:43:42.308680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca8140 is same with the state(5) to be set 00:37:52.012 [2024-07-11 02:43:42.308705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca8140 (9): Bad file descriptor 00:37:52.012 [2024-07-11 02:43:42.308727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:52.012 [2024-07-11 02:43:42.308743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:52.012 [2024-07-11 02:43:42.308758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:52.012 [2024-07-11 02:43:42.308786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.012 [2024-07-11 02:43:42.318569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:52.012 [2024-07-11 02:43:42.318716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.012 [2024-07-11 02:43:42.318744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca8140 with addr=10.0.0.2, port=4420 00:37:52.012 [2024-07-11 02:43:42.318762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca8140 is same with the state(5) to be set 00:37:52.012 [2024-07-11 02:43:42.318787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca8140 (9): Bad file descriptor 00:37:52.012 [2024-07-11 02:43:42.318809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:52.012 [2024-07-11 02:43:42.318824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:52.012 [2024-07-11 02:43:42.318840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:52.012 [2024-07-11 02:43:42.318862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.012 [2024-07-11 02:43:42.328643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:52.012 [2024-07-11 02:43:42.328843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.012 [2024-07-11 02:43:42.328872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca8140 with addr=10.0.0.2, port=4420 00:37:52.012 [2024-07-11 02:43:42.328896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca8140 is same with the state(5) to be set 00:37:52.012 [2024-07-11 02:43:42.328921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca8140 (9): Bad file descriptor 00:37:52.012 [2024-07-11 02:43:42.328943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:52.012 [2024-07-11 02:43:42.328958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:52.012 [2024-07-11 02:43:42.328974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:52.012 [2024-07-11 02:43:42.328995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:37:52.012 [2024-07-11 02:43:42.338722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:52.012 [2024-07-11 02:43:42.338902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.012 [2024-07-11 02:43:42.338932] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca8140 with addr=10.0.0.2, port=4420 00:37:52.012 [2024-07-11 02:43:42.338950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca8140 is same with the state(5) to be set 00:37:52.012 [2024-07-11 02:43:42.338976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca8140 (9): Bad file descriptor 00:37:52.012 [2024-07-11 02:43:42.338999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:52.012 [2024-07-11 02:43:42.339014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:52.012 [2024-07-11 02:43:42.339030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:52.012 [2024-07-11 02:43:42.339051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.012 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:52.012 [2024-07-11 02:43:42.348800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:52.012 [2024-07-11 02:43:42.348951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.012 [2024-07-11 02:43:42.348979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca8140 with addr=10.0.0.2, port=4420 00:37:52.012 [2024-07-11 02:43:42.349003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca8140 is same with the state(5) to be set 00:37:52.012 [2024-07-11 02:43:42.349034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca8140 (9): Bad file descriptor 00:37:52.012 [2024-07-11 02:43:42.349056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:52.012 [2024-07-11 02:43:42.349071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:52.012 [2024-07-11 02:43:42.349086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:52.012 [2024-07-11 02:43:42.349108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.012 [2024-07-11 02:43:42.358875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:52.013 [2024-07-11 02:43:42.359022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.013 [2024-07-11 02:43:42.359050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca8140 with addr=10.0.0.2, port=4420 00:37:52.013 [2024-07-11 02:43:42.359067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca8140 is same with the state(5) to be set 00:37:52.013 [2024-07-11 02:43:42.359091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca8140 (9): Bad file descriptor 00:37:52.013 [2024-07-11 02:43:42.359113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:52.013 [2024-07-11 02:43:42.359129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:52.013 [2024-07-11 02:43:42.359144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:52.013 [2024-07-11 02:43:42.359166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.013 [2024-07-11 02:43:42.366227] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:37:52.013 [2024-07-11 02:43:42.366258] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:37:52.013 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:37:52.013 02:43:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:37:52.980 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:37:52.980 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:37:52.980 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:37:52.980 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:37:52.980 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:52.980 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:52.980 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:37:52.980 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:37:52.980 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:37:52.980 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:53.238 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:37:53.238 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:37:53.238 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 
00:37:53.238 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:37:53.238 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:37:53.238 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:37:53.238 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:37:53.238 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:37:53.238 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:37:53.238 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:37:53.238 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:37:53.238 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:37:53.238 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.238 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:53.238 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:53.238 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:53.239 02:43:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:54.613 [2024-07-11 02:43:44.660274] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:37:54.613 [2024-07-11 02:43:44.660305] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:37:54.613 [2024-07-11 02:43:44.660330] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:37:54.613 [2024-07-11 02:43:44.747621] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:37:54.613 [2024-07-11 02:43:44.854748] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:37:54.613 [2024-07-11 02:43:44.854806] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:54.613 request: 00:37:54.613 { 00:37:54.613 "name": "nvme", 00:37:54.613 "trtype": "tcp", 00:37:54.613 "traddr": "10.0.0.2", 00:37:54.613 "adrfam": "ipv4", 00:37:54.613 "trsvcid": "8009", 00:37:54.613 "hostnqn": "nqn.2021-12.io.spdk:test", 00:37:54.613 "wait_for_attach": true, 00:37:54.613 "method": "bdev_nvme_start_discovery", 00:37:54.613 "req_id": 1 00:37:54.613 } 00:37:54.613 Got JSON-RPC error 
response 00:37:54.613 response: 00:37:54.613 { 00:37:54.613 "code": -17, 00:37:54.613 "message": "File exists" 00:37:54.613 } 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.613 02:43:44 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.613 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:54.613 request: 00:37:54.613 { 00:37:54.613 "name": "nvme_second", 00:37:54.613 
"trtype": "tcp", 00:37:54.613 "traddr": "10.0.0.2", 00:37:54.613 "adrfam": "ipv4", 00:37:54.613 "trsvcid": "8009", 00:37:54.613 "hostnqn": "nqn.2021-12.io.spdk:test", 00:37:54.613 "wait_for_attach": true, 00:37:54.613 "method": "bdev_nvme_start_discovery", 00:37:54.613 "req_id": 1 00:37:54.613 } 00:37:54.614 Got JSON-RPC error response 00:37:54.614 response: 00:37:54.614 { 00:37:54.614 "code": -17, 00:37:54.614 "message": "File exists" 00:37:54.614 } 00:37:54.614 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:54.614 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:37:54.614 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:54.614 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:54.614 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:54.614 02:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:37:54.614 02:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:37:54.614 02:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:37:54.614 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.614 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:54.614 02:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:37:54.614 02:43:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:37:54.614 02:43:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.614 02:43:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:37:54.614 02:43:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:37:54.614 02:43:45 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:54.614 02:43:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:54.614 02:43:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.614 02:43:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:54.614 02:43:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:54.614 02:43:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:54.873 02:43:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.873 02:43:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:37:54.873 02:43:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:37:54.873 02:43:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:37:54.873 02:43:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:37:54.873 02:43:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:54.873 02:43:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:54.873 02:43:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:54.873 02:43:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:54.873 02:43:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 
-f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:37:54.873 02:43:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.873 02:43:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:55.807 [2024-07-11 02:43:46.075255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.807 [2024-07-11 02:43:46.075324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca8ff0 with addr=10.0.0.2, port=8010 00:37:55.807 [2024-07-11 02:43:46.075354] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:37:55.807 [2024-07-11 02:43:46.075372] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:55.807 [2024-07-11 02:43:46.075387] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:37:56.741 [2024-07-11 02:43:47.077696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:56.742 [2024-07-11 02:43:47.077765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca8ff0 with addr=10.0.0.2, port=8010 00:37:56.742 [2024-07-11 02:43:47.077794] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:37:56.742 [2024-07-11 02:43:47.077812] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:56.742 [2024-07-11 02:43:47.077827] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:37:57.676 [2024-07-11 02:43:48.079876] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:37:57.676 request: 00:37:57.676 { 00:37:57.676 "name": "nvme_second", 00:37:57.676 "trtype": "tcp", 00:37:57.676 "traddr": "10.0.0.2", 00:37:57.676 "adrfam": "ipv4", 00:37:57.676 "trsvcid": "8010", 00:37:57.676 "hostnqn": "nqn.2021-12.io.spdk:test", 00:37:57.676 "wait_for_attach": false, 
00:37:57.676 "attach_timeout_ms": 3000, 00:37:57.676 "method": "bdev_nvme_start_discovery", 00:37:57.676 "req_id": 1 00:37:57.676 } 00:37:57.676 Got JSON-RPC error response 00:37:57.676 response: 00:37:57.676 { 00:37:57.676 "code": -110, 00:37:57.676 "message": "Connection timed out" 00:37:57.676 } 00:37:57.676 02:43:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:57.676 02:43:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:37:57.676 02:43:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:57.676 02:43:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:57.676 02:43:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:57.676 02:43:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:37:57.676 02:43:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:37:57.676 02:43:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:37:57.676 02:43:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:57.676 02:43:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:57.676 02:43:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:37:57.676 02:43:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:37:57.934 02:43:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:57.934 02:43:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:37:57.934 02:43:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:37:57.934 02:43:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1951642 00:37:57.934 02:43:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # 
nvmftestfini 00:37:57.934 02:43:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:57.934 02:43:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:37:57.934 02:43:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:57.934 02:43:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:37:57.934 02:43:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:57.934 02:43:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:57.934 rmmod nvme_tcp 00:37:57.934 rmmod nvme_fabrics 00:37:57.934 rmmod nvme_keyring 00:37:57.934 02:43:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:57.934 02:43:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:37:57.934 02:43:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:37:57.934 02:43:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1951618 ']' 00:37:57.934 02:43:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1951618 00:37:57.934 02:43:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1951618 ']' 00:37:57.934 02:43:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1951618 00:37:57.934 02:43:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:37:57.934 02:43:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:57.934 02:43:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1951618 00:37:57.934 02:43:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:57.935 02:43:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:57.935 02:43:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing 
process with pid 1951618' 00:37:57.935 killing process with pid 1951618 00:37:57.935 02:43:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1951618 00:37:57.935 02:43:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1951618 00:37:58.193 02:43:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:58.193 02:43:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:58.193 02:43:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:58.193 02:43:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:58.193 02:43:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:58.193 02:43:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:58.193 02:43:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:58.193 02:43:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:00.094 02:43:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:00.094 00:38:00.094 real 0m13.748s 00:38:00.094 user 0m21.003s 00:38:00.094 sys 0m2.601s 00:38:00.094 02:43:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:00.094 02:43:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:00.094 ************************************ 00:38:00.094 END TEST nvmf_host_discovery 00:38:00.094 ************************************ 00:38:00.094 02:43:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:38:00.094 02:43:50 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:38:00.094 02:43:50 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:00.094 02:43:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:00.094 02:43:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:00.094 ************************************ 00:38:00.094 START TEST nvmf_host_multipath_status 00:38:00.094 ************************************ 00:38:00.094 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:38:00.352 * Looking for test storage... 00:38:00.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.352 02:43:50 
nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@47 -- # : 0 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:00.352 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:00.353 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:00.353 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:00.353 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:00.353 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:00.353 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:00.353 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:00.353 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:38:00.353 02:43:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:38:01.731 02:43:52 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:01.731 02:43:52 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:01.731 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:38:01.732 Found 0000:08:00.0 (0x8086 - 0x159b) 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:38:01.732 Found 0000:08:00.1 (0x8086 - 0x159b) 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:01.732 02:43:52 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:38:01.732 Found net devices under 0000:08:00.0: cvl_0_0 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:38:01.732 Found net devices under 0000:08:00.1: cvl_0_1 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:01.732 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:01.989 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:01.989 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:01.989 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:01.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:01.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:38:01.989 00:38:01.989 --- 10.0.0.2 ping statistics --- 00:38:01.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:01.989 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:38:01.989 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:01.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:01.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:38:01.989 00:38:01.989 --- 10.0.0.1 ping statistics --- 00:38:01.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:01.989 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:38:01.989 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:01.989 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:38:01.989 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:01.989 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:01.989 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:01.989 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:01.989 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:01.989 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:01.989 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:01.989 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:38:01.989 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:01.990 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:38:01.990 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:38:01.990 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1954121 00:38:01.990 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:38:01.990 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1954121 00:38:01.990 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1954121 ']' 00:38:01.990 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:01.990 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:01.990 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:01.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:01.990 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:01.990 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:38:01.990 [2024-07-11 02:43:52.264346] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:38:01.990 [2024-07-11 02:43:52.264451] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:01.990 EAL: No free 2048 kB hugepages reported on node 1 00:38:01.990 [2024-07-11 02:43:52.329254] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:02.247 [2024-07-11 02:43:52.416093] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:02.247 [2024-07-11 02:43:52.416147] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:02.247 [2024-07-11 02:43:52.416163] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:02.247 [2024-07-11 02:43:52.416176] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:02.247 [2024-07-11 02:43:52.416188] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
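The `nvmf_tcp_init` plumbing traced earlier in this log (nvmf/common.sh@229–268) boils down to a handful of ip(8)/iptables commands: move one E810 port (`cvl_0_0`) into a private namespace for the target side, address both ends of the link, and ping across before starting `nvmf_tgt`. A minimal dry-run sketch — it prints the commands instead of executing them, since the real ones need root; interface, namespace, and address values are taken from this log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init topology seen in the log:
# cvl_0_0 moves into namespace cvl_0_0_ns_spdk (target side, 10.0.0.2),
# cvl_0_1 stays in the root namespace (initiator side, 10.0.0.1).
target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

run() { printf '%s\n' "$*"; }   # swap the printf for "$@" (as root) to apply for real

run ip -4 addr flush "$target_if"
run ip -4 addr flush "$initiator_if"
run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"
run ip addr add 10.0.0.1/24 dev "$initiator_if"
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
run ip link set "$initiator_if" up
run ip netns exec "$ns" ip link set "$target_if" up
run ip netns exec "$ns" ip link set lo up
run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
```

The two-way ping at the end mirrors the log's verification step; only after both directions succeed does the harness launch `nvmf_tgt` under `ip netns exec cvl_0_0_ns_spdk`.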
00:38:02.247 [2024-07-11 02:43:52.416259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:02.247 [2024-07-11 02:43:52.416265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:02.247 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:02.247 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:38:02.247 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:02.247 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:02.247 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:38:02.247 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:02.247 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1954121 00:38:02.247 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:02.504 [2024-07-11 02:43:52.827534] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:02.504 02:43:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:38:02.760 Malloc0 00:38:02.760 02:43:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:38:03.017 02:43:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:03.274 02:43:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:03.538 [2024-07-11 02:43:53.837734] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:03.539 02:43:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:38:03.800 [2024-07-11 02:43:54.086520] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:38:03.800 02:43:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1954341 00:38:03.800 02:43:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:38:03.800 02:43:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:03.801 02:43:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1954341 /var/tmp/bdevperf.sock 00:38:03.801 02:43:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1954341 ']' 00:38:03.801 02:43:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:03.801 02:43:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:03.801 02:43:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:38:03.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:03.801 02:43:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:03.801 02:43:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:38:04.058 02:43:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:04.058 02:43:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:38:04.058 02:43:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:38:04.316 02:43:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:38:04.881 Nvme0n1 00:38:04.881 02:43:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:38:05.445 Nvme0n1 00:38:05.445 02:43:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:38:05.445 02:43:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:38:07.342 02:43:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:38:07.342 02:43:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:38:07.600 02:43:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:38:08.166 02:43:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:38:09.101 02:43:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:38:09.101 02:43:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:38:09.101 02:43:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:09.101 02:43:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:09.359 02:43:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:09.359 02:43:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:38:09.359 02:43:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:09.359 02:43:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:09.617 02:43:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:09.617 02:43:59 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:09.617 02:43:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:09.617 02:43:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:09.875 02:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:09.875 02:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:09.875 02:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:09.875 02:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:10.134 02:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:10.134 02:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:10.134 02:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:10.134 02:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:10.702 02:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:10.702 02:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:10.702 02:44:00 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:10.702 02:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:10.702 02:44:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:10.702 02:44:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:38:10.702 02:44:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:38:11.268 02:44:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:38:11.526 02:44:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:38:12.459 02:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:38:12.459 02:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:38:12.459 02:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:12.459 02:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:12.718 02:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:38:12.718 02:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:38:12.718 02:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:12.718 02:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:13.016 02:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:13.016 02:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:13.016 02:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:13.016 02:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:13.273 02:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:13.273 02:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:13.273 02:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:13.273 02:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:13.530 02:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:13.530 02:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 
4420 accessible true 00:38:13.530 02:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:13.530 02:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:13.788 02:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:13.788 02:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:13.788 02:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:13.788 02:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:14.046 02:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:14.046 02:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:38:14.046 02:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:38:14.304 02:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:38:14.561 02:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:38:15.931 02:44:05 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@102 -- # check_status true false true true true true 00:38:15.931 02:44:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:38:15.931 02:44:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:15.931 02:44:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:15.931 02:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:15.931 02:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:38:15.931 02:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:15.931 02:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:16.189 02:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:16.189 02:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:16.189 02:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:16.189 02:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:16.446 02:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:16.446 02:44:06 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:16.446 02:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:16.446 02:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:16.703 02:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:16.703 02:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:16.703 02:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:16.703 02:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:16.961 02:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:16.961 02:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:16.961 02:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:16.961 02:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:17.219 02:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:17.219 02:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 
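The repeating pattern in this section is `set_ANA_state A B` followed by a `check_status` round: each call sets the ANA state of the 4420 and 4421 listeners via RPC, then the per-path flags are re-checked. A dry-run sketch of that helper, printing the RPC invocations rather than issuing them (the `rpc.py` path is shortened from the log's absolute workspace path):

```shell
# Dry-run sketch of the set_ANA_state helper pattern from multipath_status.sh:
# each round sets the ANA state of the 4420 and 4421 listeners, after which
# the current/connected/accessible flags are re-read via bdev_nvme_get_io_paths.
nqn=nqn.2016-06.io.spdk:cnode1
rpc=scripts/rpc.py   # path shortened from the log's absolute workspace path

set_ANA_state() {    # $1 = state for port 4420, $2 = state for port 4421
  printf '%s\n' "$rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n $1"
  printf '%s\n' "$rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n $2"
}

# The sequence exercised in this log:
set_ANA_state optimized     optimized
set_ANA_state non_optimized optimized
set_ANA_state non_optimized non_optimized
set_ANA_state non_optimized inaccessible
set_ANA_state inaccessible  inaccessible
```

Note how the expected `check_status` arguments track the states: the `current` flag follows whichever path is preferred, while `accessible` flips to false only once a listener is set inaccessible.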
00:38:17.219 02:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:38:17.476 02:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:38:17.733 02:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:38:18.666 02:44:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:38:18.666 02:44:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:38:18.666 02:44:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:18.666 02:44:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:18.923 02:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:18.923 02:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:38:18.923 02:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:18.923 02:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:19.180 02:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# [[ false == \f\a\l\s\e ]] 00:38:19.180 02:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:19.180 02:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:19.180 02:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:19.438 02:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:19.438 02:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:19.438 02:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:19.438 02:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:20.000 02:44:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:20.000 02:44:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:20.000 02:44:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:20.000 02:44:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:20.257 02:44:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:20.257 02:44:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 
-- # port_status 4421 accessible false 00:38:20.257 02:44:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:20.257 02:44:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:20.515 02:44:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:20.515 02:44:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:38:20.515 02:44:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:38:20.773 02:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:38:21.031 02:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:38:21.963 02:44:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:38:21.963 02:44:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:38:21.963 02:44:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:21.963 02:44:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:22.528 02:44:12 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:22.528 02:44:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:38:22.528 02:44:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:22.528 02:44:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:22.786 02:44:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:22.786 02:44:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:22.786 02:44:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:22.786 02:44:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:23.044 02:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:23.044 02:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:23.044 02:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:23.044 02:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:23.306 02:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:23.306 
02:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:38:23.306 02:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:23.306 02:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:23.570 02:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:23.570 02:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:38:23.570 02:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:23.570 02:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:23.828 02:44:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:23.828 02:44:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:38:23.828 02:44:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:38:24.085 02:44:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:38:24.343 02:44:14 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@113 -- # sleep 1 00:38:25.716 02:44:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:38:25.716 02:44:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:38:25.716 02:44:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:25.716 02:44:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:25.716 02:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:25.716 02:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:38:25.716 02:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:25.716 02:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:25.975 02:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:25.975 02:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:25.975 02:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:25.975 02:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:26.233 02:44:16 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:26.233 02:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:26.491 02:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:26.491 02:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:26.749 02:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:26.749 02:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:38:26.749 02:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:26.749 02:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:27.006 02:44:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:27.006 02:44:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:27.006 02:44:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:27.006 02:44:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:27.264 02:44:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:27.264 02:44:17 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:38:27.522 02:44:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:38:27.523 02:44:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:38:27.780 02:44:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:38:27.780 02:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:38:29.153 02:44:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:38:29.153 02:44:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:38:29.153 02:44:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:29.153 02:44:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:29.153 02:44:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:29.153 02:44:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:38:29.153 02:44:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:29.153 02:44:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:29.411 02:44:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:29.411 02:44:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:29.411 02:44:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:29.411 02:44:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:29.976 02:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:29.976 02:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:29.976 02:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:29.976 02:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:30.234 02:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:30.234 02:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:30.234 02:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
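The `port_status` checks traced above all follow one pattern: fetch the I/O path list from bdevperf over its RPC socket with `bdev_nvme_get_io_paths`, pick the path whose listener port (`trsvcid`) matches, and compare one attribute (`current`, `connected`, or `accessible`) against the expected value. A minimal reconstruction of that helper, with the RPC response replaced by an illustrative inline JSON payload (the payload shape is an assumption, built only to satisfy the jq filter the trace applies; a real run would pipe `rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths` instead):

```shell
# Reconstruction of the port_status helper seen in the trace.
# Args: listener port, attribute name, expected value ("true"/"false").
port_status() {
    local port=$1 attr=$2 expected=$3
    local actual
    # Same jq filter as the trace: select the io_path by trsvcid,
    # then project the requested attribute.
    actual=$(jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr" <<< "$io_paths_json")
    [[ $actual == "$expected" ]]
}

# Illustrative stand-in for a live bdev_nvme_get_io_paths response
# (hypothetical values, shaped to match the jq filter above).
io_paths_json='{"poll_groups":[{"io_paths":[
  {"transport":{"trsvcid":"4420"},"current":true,"connected":true,"accessible":true},
  {"transport":{"trsvcid":"4421"},"current":false,"connected":true,"accessible":false}]}]}'

port_status 4420 current true && echo "4420 current ok"
port_status 4421 accessible false && echo "4421 accessible ok"
```

The test script's `check_status` is then just six such calls in a row, one per port/attribute pair, which is why each assertion in the trace appears as a `port_status <port> <attr> <expected>` line followed by the rpc.py and jq invocations.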
00:38:30.234 02:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:30.234 02:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:30.234 02:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:30.234 02:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:30.234 02:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:30.492 02:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:30.492 02:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:38:30.492 02:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:38:30.750 02:44:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:38:31.008 02:44:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:38:32.379 02:44:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:38:32.379 02:44:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:38:32.379 02:44:22 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:32.379 02:44:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:32.379 02:44:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:32.379 02:44:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:38:32.379 02:44:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:32.379 02:44:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:32.637 02:44:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:32.637 02:44:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:32.637 02:44:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:32.637 02:44:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:32.895 02:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:32.895 02:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:32.895 02:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:32.895 02:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:33.152 02:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:33.152 02:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:33.152 02:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:33.152 02:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:33.410 02:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:33.410 02:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:33.410 02:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:33.410 02:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:33.700 02:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:33.700 02:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:38:33.700 02:44:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:38:33.983 02:44:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:38:34.240 02:44:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:38:35.171 02:44:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:38:35.171 02:44:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:38:35.171 02:44:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:35.171 02:44:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:35.428 02:44:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:35.428 02:44:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:38:35.428 02:44:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:35.428 02:44:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:35.685 02:44:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:35.685 02:44:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:35.685 02:44:26 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:35.685 02:44:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:35.942 02:44:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:35.942 02:44:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:35.942 02:44:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:35.942 02:44:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:36.199 02:44:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:36.199 02:44:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:36.199 02:44:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:36.199 02:44:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:36.457 02:44:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:36.457 02:44:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:36.457 02:44:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:36.457 02:44:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:36.714 02:44:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:36.714 02:44:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:38:36.714 02:44:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:38:36.971 02:44:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:38:37.229 02:44:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:38:38.160 02:44:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:38:38.160 02:44:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:38:38.160 02:44:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:38.160 02:44:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:38.418 02:44:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:38.418 02:44:28 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:38:38.418 02:44:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:38.418 02:44:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:38.984 02:44:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:38.984 02:44:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:38.984 02:44:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:38.984 02:44:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:39.241 02:44:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:39.242 02:44:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:39.242 02:44:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:39.242 02:44:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:39.499 02:44:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:39.499 02:44:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:39.499 
02:44:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:39.499 02:44:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:39.757 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:39.757 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:38:39.757 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:39.757 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:40.015 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:40.015 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1954341 00:38:40.015 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1954341 ']' 00:38:40.015 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1954341 00:38:40.015 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:38:40.015 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:40.015 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1954341 00:38:40.015 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:38:40.015 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:38:40.015 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1954341' 00:38:40.015 killing process with pid 1954341 00:38:40.015 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1954341 00:38:40.015 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1954341 00:38:40.277 Connection closed with partial response: 00:38:40.277 00:38:40.277 00:38:40.277 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1954341 00:38:40.277 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:38:40.277 [2024-07-11 02:43:54.146836] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:38:40.277 [2024-07-11 02:43:54.146938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1954341 ] 00:38:40.277 EAL: No free 2048 kB hugepages reported on node 1 00:38:40.277 [2024-07-11 02:43:54.201997] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:40.277 [2024-07-11 02:43:54.289484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:40.277 Running I/O for 90 seconds... 
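The `set_ANA_state` helper exercised throughout the trace issues two `nvmf_subsystem_listener_set_ana_state` RPCs against the target, one per listener port, taking the desired ANA state for port 4420 as its first argument and for 4421 as its second. A sketch of that helper, with `rpc_py` stubbed to `echo` so it runs without a live SPDK target (the NQN, address, and ports are copied from the trace):

```shell
# Stub: print each RPC instead of contacting a live SPDK target.
rpc_py="echo rpc.py"

# Reconstruction of set_ANA_state as traced: $1 = ANA state for the
# 4420 listener, $2 = ANA state for the 4421 listener.
set_ANA_state() {
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

set_ANA_state non_optimized inaccessible
```

Each state change in the trace is followed by `sleep 1` and then a `check_status` round, giving the host's multipath layer time to observe the new ANA state before the path attributes are asserted.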
00:38:40.277 [2024-07-11 02:44:11.035061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:40.277 [2024-07-11 02:44:11.035129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:38:40.277 [2024-07-11 02:44:11.035192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:40.277 [2024-07-11 02:44:11.035215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:38:40.277 [2024-07-11 02:44:11.035242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:40.277 [2024-07-11 02:44:11.035260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:38:40.277 [2024-07-11 02:44:11.035285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:40.277 [2024-07-11 02:44:11.035304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:38:40.277 [2024-07-11 02:44:11.035329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:40.277 [2024-07-11 02:44:11.035347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:38:40.277 [2024-07-11 02:44:11.035371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:40.277 [2024-07-11 
02:44:11.035389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:40.277 [2024-07-11 02:44:11.035414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:40.277 [2024-07-11 02:44:11.035432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:38:40.277 [2024-07-11 02:44:11.035457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:40.277 [2024-07-11 02:44:11.035475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:40.277 [2024-07-11 02:44:11.036995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:40.277 [2024-07-11 02:44:11.037022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:38:40.277 [2024-07-11 02:44:11.037053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:40.277 [2024-07-11 02:44:11.037072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:38:40.277 [2024-07-11 02:44:11.037098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:40.277 [2024-07-11 02:44:11.037128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:38:40.277 [2024-07-11 02:44:11.037155] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.277 [2024-07-11 02:44:11.037173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:38:40.277 [2024-07-11 02:44:11.037199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.277 [2024-07-11 02:44:11.037216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.037242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.037260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.037285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.037302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.037327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.037345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.037371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.037388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.037414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.037431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.037456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.037473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.037499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.037523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.037550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.037568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.037593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.037611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.037636] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.037653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.037683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.037701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.037727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.037744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.037847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.037871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.037902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.037920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.037947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.037965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.037992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.038010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.038036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.038054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.038081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.038098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.038125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.038143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.038170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.038188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.038215] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.038233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.038260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.038277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.038309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.038327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.038355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.038373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.038401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.038418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.038445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.038462] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.038490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.038507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.038544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.038562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.038589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.038606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.038633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.038651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.038678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.038695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.038722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.038739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:38:40.278 [2024-07-11 02:44:11.038766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.278 [2024-07-11 02:44:11.038784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.038810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.279 [2024-07-11 02:44:11.038828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.038855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.279 [2024-07-11 02:44:11.038879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.038907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.279 [2024-07-11 02:44:11.038924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.038952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.279 [2024-07-11 02:44:11.038969] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.038996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.279 [2024-07-11 02:44:11.039013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.039040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.279 [2024-07-11 02:44:11.039058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.039085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.279 [2024-07-11 02:44:11.039102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.039129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.279 [2024-07-11 02:44:11.039146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.039173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.279 [2024-07-11 02:44:11.039192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.039218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.279 [2024-07-11 02:44:11.039236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.039263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.279 [2024-07-11 02:44:11.039280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.039307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.279 [2024-07-11 02:44:11.039325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.039352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.279 [2024-07-11 02:44:11.039370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.039397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.279 [2024-07-11 02:44:11.039418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.039446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.279 [2024-07-11 02:44:11.039464] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.039492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.279 [2024-07-11 02:44:11.039518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.039547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.279 [2024-07-11 02:44:11.039565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.039593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.279 [2024-07-11 02:44:11.039610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.039637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.279 [2024-07-11 02:44:11.039653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.039681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.279 [2024-07-11 02:44:11.039699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.039726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:40.279 [2024-07-11 02:44:11.039744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.039771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:40.279 [2024-07-11 02:44:11.039788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.039816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:40.279 [2024-07-11 02:44:11.039833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.039861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:40.279 [2024-07-11 02:44:11.039878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.039906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:40.279 [2024-07-11 02:44:11.039924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.039952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.279 [2024-07-11 02:44:11.039969] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.040126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.279 [2024-07-11 02:44:11.040148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.040183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.279 [2024-07-11 02:44:11.040202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.040234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.279 [2024-07-11 02:44:11.040252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.040283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.279 [2024-07-11 02:44:11.040301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:38:40.279 [2024-07-11 02:44:11.040332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.279 [2024-07-11 02:44:11.040350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:38:40.280 [2024-07-11 02:44:11.040381] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.280 [2024-07-11 02:44:11.040398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:38:40.280 [2024-07-11 02:44:11.040429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.280 [2024-07-11 02:44:11.040447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:38:40.280 [2024-07-11 02:44:11.040478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.280 [2024-07-11 02:44:11.040496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:38:40.280 [2024-07-11 02:44:11.040536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.280 [2024-07-11 02:44:11.040556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:38:40.280 [2024-07-11 02:44:11.040588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.280 [2024-07-11 02:44:11.040606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:38:40.280 [2024-07-11 02:44:11.040637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.280 [2024-07-11 02:44:11.040654] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:38:40.280 [2024-07-11 02:44:11.040685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.280 [2024-07-11 02:44:11.040703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:38:40.280 [2024-07-11 02:44:11.040739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.280 [2024-07-11 02:44:11.040758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:38:40.280 [2024-07-11 02:44:11.040790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.280 [2024-07-11 02:44:11.040807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:38:40.280 [2024-07-11 02:44:11.040838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.280 [2024-07-11 02:44:11.040855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:38:40.280 [2024-07-11 02:44:11.040886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.280 [2024-07-11 02:44:11.040904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:38:40.280 [2024-07-11 02:44:11.040935] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.280 [2024-07-11 02:44:11.040953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:38:40.280 [2024-07-11 02:44:11.040984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.280 [2024-07-11 02:44:11.041001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:38:40.280 [2024-07-11 02:44:11.041032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.280 [2024-07-11 02:44:11.041050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:38:40.280 [2024-07-11 02:44:11.041081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.280 [2024-07-11 02:44:11.041098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:38:40.280 [2024-07-11 02:44:11.041129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.280 [2024-07-11 02:44:11.041147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:38:40.280 [2024-07-11 02:44:11.041178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.280 [2024-07-11 02:44:11.041195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:38:40.280 [2024-07-11 02:44:11.041226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.280 [2024-07-11 02:44:11.041244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:38:40.280 [2024-07-11 02:44:11.041275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.280 [2024-07-11 02:44:11.041292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:38:40.280 [2024-07-11 02:44:11.041323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.280 [2024-07-11 02:44:11.041344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:38:40.280 [2024-07-11 02:44:11.041376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.280 [2024-07-11 02:44:11.041394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:38:40.280 [2024-07-11 02:44:11.041425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.280 [2024-07-11 02:44:11.041442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:38:40.280 [2024-07-11 02:44:11.041473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.280 [2024-07-11 02:44:11.041491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:38:40.280 [2024-07-11 02:44:11.041528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.280 [2024-07-11 02:44:11.041547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:38:40.280 [2024-07-11 02:44:11.041580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.280 [2024-07-11 02:44:11.041598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:38:40.280 [2024-07-11 02:44:11.041628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.280 [2024-07-11 02:44:11.041646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:38:40.280 [2024-07-11 02:44:11.041678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.280 [2024-07-11 02:44:11.041695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:38:40.280 [2024-07-11 02:44:11.041727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.280 [2024-07-11 02:44:11.041744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:38:40.280 [2024-07-11 02:44:11.041775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.280 [2024-07-11 02:44:11.041793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:38:40.280 [2024-07-11 02:44:11.041824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.280 [2024-07-11 02:44:11.041841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:38:40.280 [2024-07-11 02:44:11.041873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.280 [2024-07-11 02:44:11.041890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:38:40.280 [2024-07-11 02:44:11.041921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.280 [2024-07-11 02:44:11.041942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:11.041974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.281 [2024-07-11 02:44:11.041992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:11.042023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.281 [2024-07-11 02:44:11.042041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:11.042072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.281 [2024-07-11 02:44:11.042090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:11.042121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.281 [2024-07-11 02:44:11.042138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:11.042169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.281 [2024-07-11 02:44:11.042187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:11.042218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.281 [2024-07-11 02:44:11.042235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:11.042266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.281 [2024-07-11 02:44:11.042284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:11.042315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.281 [2024-07-11 02:44:11.042333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:11.042364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.281 [2024-07-11 02:44:11.042382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:11.042413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.281 [2024-07-11 02:44:11.042431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:11.042462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.281 [2024-07-11 02:44:11.042479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:11.042518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.281 [2024-07-11 02:44:11.042537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:11.042573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.281 [2024-07-11 02:44:11.042591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:11.042622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.281 [2024-07-11 02:44:11.042639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:11.042670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.281 [2024-07-11 02:44:11.042688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:11.042719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.281 [2024-07-11 02:44:11.042736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:11.042767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.281 [2024-07-11 02:44:11.042785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:11.042816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.281 [2024-07-11 02:44:11.042833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:11.042864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.281 [2024-07-11 02:44:11.042882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:27.494330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:116280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.281 [2024-07-11 02:44:27.494400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:27.494463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:116312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.281 [2024-07-11 02:44:27.494485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:27.494521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:116344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.281 [2024-07-11 02:44:27.494540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:27.494566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:116488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.281 [2024-07-11 02:44:27.494584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:27.494610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:116504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.281 [2024-07-11 02:44:27.494627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:27.494661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:116520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.281 [2024-07-11 02:44:27.494680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:27.494705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:116536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.281 [2024-07-11 02:44:27.494723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:27.494748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:116552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.281 [2024-07-11 02:44:27.494765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:27.494790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:116568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.281 [2024-07-11 02:44:27.494808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:27.494833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:116584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.281 [2024-07-11 02:44:27.494851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:27.494876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:116600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.281 [2024-07-11 02:44:27.494893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:27.494918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:116320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.281 [2024-07-11 02:44:27.494935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:27.494960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:116616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.281 [2024-07-11 02:44:27.494978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:38:40.281 [2024-07-11 02:44:27.495003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:116632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.282 [2024-07-11 02:44:27.495020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:38:40.282 [2024-07-11 02:44:27.495045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:116648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.282 [2024-07-11 02:44:27.495062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:38:40.282 [2024-07-11 02:44:27.495087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.282 [2024-07-11 02:44:27.495104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:38:40.282 [2024-07-11 02:44:27.495130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:116680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.282 [2024-07-11 02:44:27.495147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:38:40.282 [2024-07-11 02:44:27.495171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:116696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.282 [2024-07-11 02:44:27.495197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:38:40.282 [2024-07-11 02:44:27.495223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:116712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.282 [2024-07-11 02:44:27.495241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:38:40.282 [2024-07-11 02:44:27.495266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:116728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.282 [2024-07-11 02:44:27.495284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:38:40.282 [2024-07-11 02:44:27.495309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:116744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.282 [2024-07-11 02:44:27.495326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:38:40.282 [2024-07-11 02:44:27.495352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:116760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.282 [2024-07-11 02:44:27.495369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:38:40.282 [2024-07-11 02:44:27.495394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:116776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.282 [2024-07-11 02:44:27.495411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:38:40.282 [2024-07-11 02:44:27.495436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:116792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.282 [2024-07-11 02:44:27.495453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:38:40.282 [2024-07-11 02:44:27.495478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:116808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.282 [2024-07-11 02:44:27.495495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:38:40.282 [2024-07-11 02:44:27.495527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:116824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.282 [2024-07-11 02:44:27.495546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:38:40.282 [2024-07-11 02:44:27.495571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:116840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.282 [2024-07-11 02:44:27.495589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:38:40.282 [2024-07-11 02:44:27.495613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:116856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.282 [2024-07-11 02:44:27.495631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:38:40.282 [2024-07-11 02:44:27.495655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:116872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.282 [2024-07-11 02:44:27.495673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:38:40.282 [2024-07-11 02:44:27.495699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:116888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.282 [2024-07-11 02:44:27.495720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:38:40.282 [2024-07-11 02:44:27.495745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:116904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.282 [2024-07-11 02:44:27.495763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:38:40.282 [2024-07-11 02:44:27.495788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:116920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.282 [2024-07-11 02:44:27.495805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:38:40.282 [2024-07-11 02:44:27.495830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:116936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.282 [2024-07-11 02:44:27.495847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:38:40.282 [2024-07-11 02:44:27.495872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:116952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.282 [2024-07-11 02:44:27.495889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:38:40.282 [2024-07-11 02:44:27.495914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:116352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.282 [2024-07-11 02:44:27.495931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:38:40.282 [2024-07-11 02:44:27.495956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:116384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.282 [2024-07-11 02:44:27.495974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:38:40.282 [2024-07-11 02:44:27.496000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:116416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.282 [2024-07-11 02:44:27.496017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:38:40.282 [2024-07-11 02:44:27.496043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:116448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.282 [2024-07-11 02:44:27.496060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.496085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:116968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.283 [2024-07-11 02:44:27.496102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.496127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:116984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.283 [2024-07-11 02:44:27.496144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.496169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:117000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.283 [2024-07-11 02:44:27.496186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.496211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:117016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.283 [2024-07-11 02:44:27.496228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.496258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:117032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.283 [2024-07-11 02:44:27.496276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.496301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:117048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.283 [2024-07-11 02:44:27.496319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.496344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:117064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.283 [2024-07-11 02:44:27.496361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.496387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:117080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.283 [2024-07-11 02:44:27.496404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.497074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.283 [2024-07-11 02:44:27.497098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.497128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.283 [2024-07-11 02:44:27.497147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.497172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:117128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.283 [2024-07-11 02:44:27.497190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.497215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:117144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.283 [2024-07-11 02:44:27.497232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.497257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:117160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.283 [2024-07-11 02:44:27.497275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.497299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:117176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.283 [2024-07-11 02:44:27.497318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.497343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:117192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.283 [2024-07-11 02:44:27.497360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.497386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:117208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.283 [2024-07-11 02:44:27.497403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.497433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:117224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.283 [2024-07-11 02:44:27.497450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.497475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:117240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.283 [2024-07-11 02:44:27.497492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.497525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:117256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.283 [2024-07-11 02:44:27.497544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.497569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:117272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.283 [2024-07-11 02:44:27.497586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.497611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:117288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.283 [2024-07-11 02:44:27.497628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.497652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:117304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.283 [2024-07-11 02:44:27.497669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.497694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:116360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.283 [2024-07-11 02:44:27.497711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.497736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:116392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.283 [2024-07-11 02:44:27.497754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.497779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:116424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.283 [2024-07-11 02:44:27.497796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.497820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:116456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.283 [2024-07-11 02:44:27.497837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.497862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:116480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.283 [2024-07-11 02:44:27.497879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.497904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.283 [2024-07-11 02:44:27.497921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.497949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.283 [2024-07-11 02:44:27.497967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.497992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:116576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.283 [2024-07-11 02:44:27.498009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.498035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:116608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.283 [2024-07-11 02:44:27.498053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.499846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:117328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:40.283 [2024-07-11 02:44:27.499873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:38:40.283 [2024-07-11 02:44:27.499903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:116624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.284 [2024-07-11 02:44:27.499922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:38:40.284 [2024-07-11 02:44:27.499948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:116656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.284 [2024-07-11 02:44:27.499965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:38:40.284 [2024-07-11 02:44:27.499990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:116688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.284 [2024-07-11 02:44:27.500007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:38:40.284 [2024-07-11 02:44:27.500032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:116720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.284 [2024-07-11 02:44:27.500050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:38:40.284 [2024-07-11 02:44:27.500075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:116752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.284 [2024-07-11 02:44:27.500092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:38:40.284 [2024-07-11 02:44:27.500118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:116784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:40.284 [2024-07-11 02:44:27.500135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:38:40.284 [2024-07-11 02:44:27.500160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:116816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.284 [2024-07-11 02:44:27.500178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:38:40.284 [2024-07-11 02:44:27.500202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.284 [2024-07-11 02:44:27.500219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:38:40.284 [2024-07-11 02:44:27.500244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:116880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.284 [2024-07-11 02:44:27.500266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:38:40.284 [2024-07-11 02:44:27.500292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:116912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.284 [2024-07-11 02:44:27.500310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:38:40.284 [2024-07-11 02:44:27.500335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:116944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.284 [2024-07-11 02:44:27.500352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:38:40.284 [2024-07-11 02:44:27.500377] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:116976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:40.284 [2024-07-11 02:44:27.500395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:38:40.284 Received shutdown signal, test time was about 34.464349 seconds 00:38:40.284 00:38:40.284 Latency(us) 00:38:40.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:40.284 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:38:40.284 Verification LBA range: start 0x0 length 0x4000 00:38:40.284 Nvme0n1 : 34.46 7240.37 28.28 0.00 0.00 17645.66 186.60 4026531.84 00:38:40.284 =================================================================================================================== 00:38:40.284 Total : 7240.37 28.28 0.00 0.00 17645.66 186.60 4026531.84 00:38:40.284 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:40.544 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:38:40.544 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:38:40.544 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:38:40.544 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:40.544 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:38:40.544 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:40.544 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:38:40.544 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:38:40.544 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:40.544 rmmod nvme_tcp 00:38:40.544 rmmod nvme_fabrics 00:38:40.544 rmmod nvme_keyring 00:38:40.544 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:40.544 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:38:40.544 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:38:40.544 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1954121 ']' 00:38:40.544 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1954121 00:38:40.544 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1954121 ']' 00:38:40.544 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1954121 00:38:40.544 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:38:40.544 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:40.544 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1954121 00:38:40.544 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:40.544 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:40.544 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1954121' 00:38:40.544 killing process with pid 1954121 00:38:40.544 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1954121 00:38:40.544 02:44:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1954121 00:38:40.803 02:44:31 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:40.803 02:44:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:40.803 02:44:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:40.803 02:44:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:40.803 02:44:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:40.803 02:44:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:40.803 02:44:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:40.803 02:44:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:43.340 02:44:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:43.340 00:38:43.340 real 0m42.689s 00:38:43.340 user 2m12.594s 00:38:43.340 sys 0m9.795s 00:38:43.340 02:44:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:43.340 02:44:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:38:43.340 ************************************ 00:38:43.340 END TEST nvmf_host_multipath_status 00:38:43.340 ************************************ 00:38:43.340 02:44:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:38:43.340 02:44:33 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:38:43.340 02:44:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:38:43.340 02:44:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:43.340 02:44:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:43.340 
************************************ 00:38:43.340 START TEST nvmf_discovery_remove_ifc 00:38:43.340 ************************************ 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:38:43.340 * Looking for test storage... 00:38:43.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:38:43.340 02:44:33 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:38:43.340 02:44:33 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:38:43.340 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:38:43.341 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:38:43.341 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:38:43.341 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:38:43.341 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:38:43.341 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:43.341 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:43.341 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:38:43.341 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:43.341 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:43.341 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:43.341 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:43.341 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:43.341 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:43.341 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:43.341 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:38:43.341 02:44:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@296 -- # e810=() 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:38:44.721 Found 0000:08:00.0 (0x8086 - 0x159b) 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:38:44.721 Found 0000:08:00.1 (0x8086 - 0x159b) 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:44.721 
02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:38:44.721 Found net devices under 0000:08:00.0: cvl_0_0 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:44.721 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:44.722 02:44:34 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:38:44.722 Found net devices under 0000:08:00.1: cvl_0_1 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:44.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:44.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:38:44.722 00:38:44.722 --- 10.0.0.2 ping statistics --- 00:38:44.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:44.722 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:44.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:44.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:38:44.722 00:38:44.722 --- 10.0.0.1 ping statistics --- 00:38:44.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:44.722 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1959302 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1959302 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1959302 ']' 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:44.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:44.722 02:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:44.722 [2024-07-11 02:44:34.999383] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:38:44.722 [2024-07-11 02:44:34.999474] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:44.722 EAL: No free 2048 kB hugepages reported on node 1 00:38:44.722 [2024-07-11 02:44:35.068372] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:44.981 [2024-07-11 02:44:35.154869] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:44.981 [2024-07-11 02:44:35.154928] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:44.981 [2024-07-11 02:44:35.154951] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:44.981 [2024-07-11 02:44:35.154973] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:44.981 [2024-07-11 02:44:35.154992] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
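The interface setup traced above (nvmf/common.sh@242-264) follows a fixed pattern: flush both ports of the NIC pair, move one into a private namespace, address both ends of the 10.0.0.0/24 link, bring them up, and open TCP/4420 so the target in the namespace is reachable from the host side. A dry-run sketch of that sequence, with the interface and namespace names taken from the log; `run()` merely echoes each command, since the real ones require root:

```shell
# Dry-run sketch of the nvmf/common.sh namespace setup seen in the log.
# Swap `run() { echo ...; }` for `run() { "$@"; }` to execute for real (root).
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk

run ip netns add "$NS"                         # private namespace for the target
run ip link set cvl_0_0 netns "$NS"            # move one port into it
run ip addr add 10.0.0.1/24 dev cvl_0_1        # host side of the link
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
```

The two `ping -c 1` calls in the log (one per direction, one run inside the namespace via `ip netns exec`) then verify the link before the target is started.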
00:38:44.981 [2024-07-11 02:44:35.155039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:44.981 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:44.981 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:38:44.981 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:44.981 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:44.981 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:44.981 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:44.981 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:38:44.981 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:44.981 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:44.981 [2024-07-11 02:44:35.289036] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:44.981 [2024-07-11 02:44:35.297179] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:38:44.981 null0 00:38:44.981 [2024-07-11 02:44:35.329148] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:44.981 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:44.981 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1959407 00:38:44.981 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:38:44.981 02:44:35 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1959407 /tmp/host.sock 00:38:44.981 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1959407 ']' 00:38:44.981 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:38:44.981 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:44.981 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:38:44.981 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:38:44.981 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:44.981 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:44.981 [2024-07-11 02:44:35.398596] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:38:44.981 [2024-07-11 02:44:35.398691] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1959407 ] 00:38:45.240 EAL: No free 2048 kB hugepages reported on node 1 00:38:45.240 [2024-07-11 02:44:35.458564] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:45.240 [2024-07-11 02:44:35.545917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:45.240 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:45.240 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:38:45.240 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:45.240 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:38:45.240 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:45.240 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:45.240 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:45.240 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:38:45.240 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:45.240 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:45.499 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:45.499 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:38:45.499 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:45.499 02:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:46.433 [2024-07-11 02:44:36.797280] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:38:46.433 [2024-07-11 02:44:36.797325] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:38:46.434 [2024-07-11 02:44:36.797349] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:38:46.702 [2024-07-11 02:44:36.924776] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:38:46.702 [2024-07-11 02:44:36.988355] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:38:46.702 [2024-07-11 02:44:36.988432] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:38:46.702 [2024-07-11 02:44:36.988478] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:38:46.702 [2024-07-11 02:44:36.988504] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:38:46.702 [2024-07-11 02:44:36.988549] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:38:46.702 02:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:46.702 02:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:38:46.702 02:44:36 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:46.702 02:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:46.702 02:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:46.702 02:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:46.702 02:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:46.702 02:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:46.702 02:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:46.702 [2024-07-11 02:44:36.995055] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1f0bec0 was disconnected and freed. delete nvme_qpair. 00:38:46.702 02:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:46.702 02:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:38:46.702 02:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:38:46.702 02:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:38:46.702 02:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:38:46.702 02:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:46.702 02:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:46.702 02:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:46.702 02:44:37 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:46.702 02:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:46.702 02:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:46.702 02:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:46.702 02:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:46.702 02:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:46.702 02:44:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:48.077 02:44:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:48.077 02:44:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:48.077 02:44:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:48.077 02:44:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:48.077 02:44:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:48.077 02:44:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:48.077 02:44:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:48.077 02:44:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:48.077 02:44:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:48.077 02:44:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:49.007 02:44:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:49.007 02:44:39 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:49.007 02:44:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:49.007 02:44:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:49.007 02:44:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:49.007 02:44:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:49.007 02:44:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:49.007 02:44:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:49.007 02:44:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:49.007 02:44:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:49.936 02:44:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:49.936 02:44:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:49.936 02:44:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:49.936 02:44:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:49.936 02:44:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:49.936 02:44:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:49.936 02:44:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:49.936 02:44:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:49.936 02:44:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:49.936 
02:44:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:50.869 02:44:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:50.869 02:44:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:50.869 02:44:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:50.869 02:44:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:50.869 02:44:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:50.869 02:44:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:50.869 02:44:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:50.869 02:44:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:51.127 02:44:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:51.127 02:44:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:52.061 02:44:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:52.061 02:44:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:52.061 02:44:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:52.061 02:44:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.061 02:44:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:52.061 02:44:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:52.061 02:44:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:52.061 
02:44:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.061 02:44:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:52.061 02:44:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:52.061 [2024-07-11 02:44:42.429834] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:38:52.061 [2024-07-11 02:44:42.429911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:38:52.061 [2024-07-11 02:44:42.429935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:52.061 [2024-07-11 02:44:42.429955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:52.061 [2024-07-11 02:44:42.429970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:52.061 [2024-07-11 02:44:42.429986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:52.061 [2024-07-11 02:44:42.430001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:52.061 [2024-07-11 02:44:42.430017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:52.061 [2024-07-11 02:44:42.430032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:52.061 [2024-07-11 02:44:42.430048] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:38:52.061 [2024-07-11 02:44:42.430063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:52.061 [2024-07-11 02:44:42.430078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed26b0 is same with the state(5) to be set 00:38:52.061 [2024-07-11 02:44:42.439848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed26b0 (9): Bad file descriptor 00:38:52.061 [2024-07-11 02:44:42.449896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:38:52.996 02:44:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:52.996 02:44:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:52.996 02:44:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:52.996 02:44:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.996 02:44:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:52.996 02:44:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:52.996 02:44:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:53.254 [2024-07-11 02:44:43.473578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:38:53.254 [2024-07-11 02:44:43.473648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ed26b0 with addr=10.0.0.2, port=4420 00:38:53.254 [2024-07-11 02:44:43.473675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed26b0 is same with the state(5) to be set 00:38:53.254 
[2024-07-11 02:44:43.473728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed26b0 (9): Bad file descriptor 00:38:53.254 [2024-07-11 02:44:43.474218] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:38:53.254 [2024-07-11 02:44:43.474250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:38:53.254 [2024-07-11 02:44:43.474267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:38:53.254 [2024-07-11 02:44:43.474284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:38:53.254 [2024-07-11 02:44:43.474319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:53.254 [2024-07-11 02:44:43.474338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:38:53.254 02:44:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:53.254 02:44:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:53.254 02:44:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:54.184 [2024-07-11 02:44:44.476841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:38:54.184 [2024-07-11 02:44:44.476898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:38:54.184 [2024-07-11 02:44:44.476915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:38:54.184 [2024-07-11 02:44:44.476931] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:38:54.184 [2024-07-11 02:44:44.476966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:54.184 [2024-07-11 02:44:44.477008] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:38:54.184 [2024-07-11 02:44:44.477065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:38:54.184 [2024-07-11 02:44:44.477087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:54.184 [2024-07-11 02:44:44.477107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:54.184 [2024-07-11 02:44:44.477122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:54.184 [2024-07-11 02:44:44.477146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:54.184 [2024-07-11 02:44:44.477162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:54.184 [2024-07-11 02:44:44.477177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:54.184 
[2024-07-11 02:44:44.477192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:54.184 [2024-07-11 02:44:44.477208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:38:54.184 [2024-07-11 02:44:44.477223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:54.184 [2024-07-11 02:44:44.477238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:38:54.184 [2024-07-11 02:44:44.477347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed1b30 (9): Bad file descriptor 00:38:54.184 [2024-07-11 02:44:44.478379] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:38:54.184 [2024-07-11 02:44:44.478403] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:38:54.184 02:44:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:54.184 02:44:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:54.184 02:44:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:54.184 02:44:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:54.184 02:44:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:54.184 02:44:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:54.184 02:44:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:54.184 02:44:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:38:54.184 02:44:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:38:54.184 02:44:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:54.184 02:44:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:54.184 02:44:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:38:54.184 02:44:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:54.184 02:44:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:54.184 02:44:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:54.184 02:44:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:54.184 02:44:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:54.184 02:44:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:54.184 02:44:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:54.184 02:44:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:54.441 02:44:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:38:54.441 02:44:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:55.425 02:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:55.425 02:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:55.425 02:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:55.425 02:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:55.425 02:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:55.425 02:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:55.425 02:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:55.425 02:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:55.425 02:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:38:55.425 02:44:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:56.357 [2024-07-11 02:44:46.532651] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:38:56.357 [2024-07-11 02:44:46.532680] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:38:56.357 [2024-07-11 02:44:46.532705] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:38:56.357 [2024-07-11 02:44:46.619978] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:38:56.357 02:44:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:56.357 02:44:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:56.357 02:44:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:56.357 02:44:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:56.357 02:44:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:56.357 
02:44:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:56.357 02:44:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:56.357 02:44:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:56.357 02:44:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:38:56.357 02:44:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:56.614 [2024-07-11 02:44:46.806032] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:38:56.614 [2024-07-11 02:44:46.806085] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:38:56.614 [2024-07-11 02:44:46.806121] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:38:56.614 [2024-07-11 02:44:46.806146] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:38:56.614 [2024-07-11 02:44:46.806160] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:38:56.614 [2024-07-11 02:44:46.811194] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1ec0370 was disconnected and freed. delete nvme_qpair. 
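The repeated `rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs` invocations above implement a simple poll: fetch the current bdev names, normalize them to one sorted space-separated line, and sleep until that line matches the expected value (first `nvme0n1`, then empty after the interface is taken down, then `nvme1n1` after rediscovery). A self-contained sketch of that loop; `rpc_cmd` is stubbed with canned one-object-per-line JSON so the example runs without a target, and `sed` stands in for the `jq` used by the real script:

```shell
# Stub for `rpc.py -s /tmp/host.sock bdev_get_bdevs` (assumption: the real
# output is a JSON array; one object per line here keeps the sed simple).
rpc_cmd() {
  printf '%s\n' '{"name": "nvme1n1"}' '{"name": "nvme0n1"}'
}

# Mirror the log's `jq -r '.[].name' | sort | xargs` pipeline:
# extract names, sort them, and collapse to a single line.
get_bdev_list() {
  rpc_cmd bdev_get_bdevs \
    | sed -n 's/.*"name": "\([^"]*\)".*/\1/p' \
    | sort | xargs
}

# The log's wait_for_bdev shape: poll once a second until the list matches.
wait_for_bdev() {
  while [ "$(get_bdev_list)" != "$1" ]; do sleep 1; done
}

get_bdev_list
```

With the stub above, `wait_for_bdev "nvme0n1 nvme1n1"` returns immediately; against a live target it spins until discovery attaches (or re-attaches) the namespace.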
00:38:57.546 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:57.547 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:57.547 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:57.547 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:57.547 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:57.547 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:57.547 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:57.547 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:57.547 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:38:57.547 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:38:57.547 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1959407 00:38:57.547 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1959407 ']' 00:38:57.547 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1959407 00:38:57.547 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:38:57.547 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:57.547 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1959407 00:38:57.547 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:57.547 02:44:47 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:57.547 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1959407' 00:38:57.547 killing process with pid 1959407 00:38:57.547 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1959407 00:38:57.547 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1959407 00:38:57.547 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:38:57.547 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:57.547 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:38:57.547 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:57.547 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:38:57.547 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:57.547 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:57.547 rmmod nvme_tcp 00:38:57.547 rmmod nvme_fabrics 00:38:57.805 rmmod nvme_keyring 00:38:57.805 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:57.805 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:38:57.805 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:38:57.805 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1959302 ']' 00:38:57.805 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1959302 00:38:57.805 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1959302 ']' 00:38:57.805 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # 
kill -0 1959302 00:38:57.805 02:44:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:38:57.805 02:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:57.805 02:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1959302 00:38:57.805 02:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:38:57.805 02:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:38:57.805 02:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1959302' 00:38:57.805 killing process with pid 1959302 00:38:57.805 02:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1959302 00:38:57.805 02:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1959302 00:38:57.805 02:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:57.805 02:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:57.805 02:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:57.805 02:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:57.805 02:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:57.805 02:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:57.805 02:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:57.805 02:44:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:00.340 02:44:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:39:00.340 00:39:00.340 real 0m17.015s 00:39:00.340 user 0m25.266s 00:39:00.340 sys 0m2.670s 00:39:00.340 02:44:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:00.340 02:44:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:39:00.340 ************************************ 00:39:00.340 END TEST nvmf_discovery_remove_ifc 00:39:00.340 ************************************ 00:39:00.340 02:44:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:39:00.340 02:44:50 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:39:00.340 02:44:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:00.340 02:44:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:00.340 02:44:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:00.340 ************************************ 00:39:00.340 START TEST nvmf_identify_kernel_target 00:39:00.340 ************************************ 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:39:00.340 * Looking for test storage... 
00:39:00.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.340 02:44:50 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:00.340 02:44:50 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:39:00.340 02:44:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:39:01.719 02:44:51 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:39:01.719 Found 0000:08:00.0 (0x8086 - 0x159b) 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:01.719 02:44:51 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:39:01.719 Found 0000:08:00.1 (0x8086 - 0x159b) 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:39:01.719 Found net devices under 0000:08:00.0: cvl_0_0 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:39:01.719 Found net devices under 0000:08:00.1: cvl_0_1 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link 
set cvl_0_1 up 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:01.719 02:44:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:01.719 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:01.719 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:01.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:01.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:39:01.719 00:39:01.719 --- 10.0.0.2 ping statistics --- 00:39:01.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:01.719 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:39:01.719 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:01.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:01.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:39:01.719 00:39:01.719 --- 10.0.0.1 ping statistics --- 00:39:01.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:01.719 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:39:01.719 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:01.719 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:39:01.719 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:01.719 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:01.720 02:44:52 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:39:01.720 02:44:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:02.667 Waiting for block devices as requested 00:39:02.667 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:39:02.925 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:39:02.925 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:39:02.925 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:39:02.925 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:39:02.926 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:39:03.185 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:39:03.185 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:39:03.185 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:39:03.185 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:39:03.443 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:39:03.443 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:39:03.443 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:39:03.699 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:39:03.699 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:39:03.699 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:39:03.956 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:39:03.956 No valid GPT data, bailing 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@669 -- # echo 1 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:39:03.956 00:39:03.956 Discovery Log Number of Records 2, Generation counter 2 00:39:03.956 =====Discovery Log Entry 0====== 00:39:03.956 trtype: tcp 00:39:03.956 adrfam: ipv4 00:39:03.956 subtype: current discovery subsystem 00:39:03.956 treq: not specified, sq flow control disable supported 00:39:03.956 portid: 1 00:39:03.956 trsvcid: 4420 00:39:03.956 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:03.956 traddr: 10.0.0.1 00:39:03.956 eflags: none 00:39:03.956 sectype: none 00:39:03.956 =====Discovery Log Entry 1====== 00:39:03.956 trtype: tcp 00:39:03.956 adrfam: ipv4 00:39:03.956 subtype: nvme subsystem 00:39:03.956 treq: not specified, sq flow control disable supported 00:39:03.956 portid: 1 00:39:03.956 trsvcid: 4420 00:39:03.956 subnqn: nqn.2016-06.io.spdk:testnqn 00:39:03.956 traddr: 10.0.0.1 00:39:03.956 eflags: none 00:39:03.956 sectype: none 00:39:03.956 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:39:03.956 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:39:04.215 EAL: No free 2048 kB hugepages reported on node 1 00:39:04.215 ===================================================== 00:39:04.215 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:39:04.215 ===================================================== 00:39:04.215 Controller Capabilities/Features 00:39:04.215 ================================ 00:39:04.215 Vendor ID: 0000 00:39:04.215 Subsystem Vendor ID: 0000 00:39:04.215 Serial Number: 21fb28278b083a274ac7 00:39:04.215 Model Number: Linux 00:39:04.215 Firmware Version: 6.7.0-68 00:39:04.215 Recommended Arb Burst: 0 00:39:04.215 IEEE OUI Identifier: 00 00 00 00:39:04.215 Multi-path I/O 00:39:04.215 May have multiple subsystem ports: No 00:39:04.215 May have multiple controllers: No 00:39:04.215 Associated with SR-IOV VF: No 00:39:04.215 Max Data Transfer Size: Unlimited 00:39:04.215 Max Number of Namespaces: 0 00:39:04.215 Max Number of I/O Queues: 1024 00:39:04.215 NVMe Specification Version (VS): 1.3 00:39:04.215 NVMe Specification Version (Identify): 1.3 00:39:04.215 Maximum Queue Entries: 1024 00:39:04.215 Contiguous Queues Required: No 00:39:04.215 Arbitration Mechanisms Supported 00:39:04.215 Weighted Round Robin: Not Supported 00:39:04.215 Vendor Specific: Not Supported 00:39:04.215 Reset Timeout: 7500 ms 00:39:04.215 Doorbell Stride: 4 bytes 00:39:04.215 NVM Subsystem Reset: Not Supported 00:39:04.215 Command Sets Supported 00:39:04.215 NVM Command Set: Supported 00:39:04.215 Boot Partition: Not Supported 00:39:04.215 Memory Page Size Minimum: 4096 bytes 00:39:04.215 Memory Page Size Maximum: 4096 bytes 00:39:04.215 Persistent Memory Region: Not Supported 00:39:04.215 Optional Asynchronous Events Supported 00:39:04.215 Namespace Attribute Notices: Not Supported 00:39:04.215 Firmware Activation Notices: Not Supported 00:39:04.215 ANA Change Notices: Not Supported 00:39:04.215 PLE Aggregate Log Change Notices: Not Supported 
00:39:04.215 LBA Status Info Alert Notices: Not Supported 00:39:04.215 EGE Aggregate Log Change Notices: Not Supported 00:39:04.215 Normal NVM Subsystem Shutdown event: Not Supported 00:39:04.215 Zone Descriptor Change Notices: Not Supported 00:39:04.215 Discovery Log Change Notices: Supported 00:39:04.215 Controller Attributes 00:39:04.215 128-bit Host Identifier: Not Supported 00:39:04.215 Non-Operational Permissive Mode: Not Supported 00:39:04.215 NVM Sets: Not Supported 00:39:04.215 Read Recovery Levels: Not Supported 00:39:04.215 Endurance Groups: Not Supported 00:39:04.215 Predictable Latency Mode: Not Supported 00:39:04.215 Traffic Based Keep ALive: Not Supported 00:39:04.215 Namespace Granularity: Not Supported 00:39:04.215 SQ Associations: Not Supported 00:39:04.215 UUID List: Not Supported 00:39:04.215 Multi-Domain Subsystem: Not Supported 00:39:04.215 Fixed Capacity Management: Not Supported 00:39:04.215 Variable Capacity Management: Not Supported 00:39:04.215 Delete Endurance Group: Not Supported 00:39:04.215 Delete NVM Set: Not Supported 00:39:04.215 Extended LBA Formats Supported: Not Supported 00:39:04.215 Flexible Data Placement Supported: Not Supported 00:39:04.215 00:39:04.215 Controller Memory Buffer Support 00:39:04.215 ================================ 00:39:04.215 Supported: No 00:39:04.215 00:39:04.215 Persistent Memory Region Support 00:39:04.215 ================================ 00:39:04.215 Supported: No 00:39:04.215 00:39:04.215 Admin Command Set Attributes 00:39:04.215 ============================ 00:39:04.215 Security Send/Receive: Not Supported 00:39:04.215 Format NVM: Not Supported 00:39:04.215 Firmware Activate/Download: Not Supported 00:39:04.215 Namespace Management: Not Supported 00:39:04.215 Device Self-Test: Not Supported 00:39:04.215 Directives: Not Supported 00:39:04.215 NVMe-MI: Not Supported 00:39:04.215 Virtualization Management: Not Supported 00:39:04.215 Doorbell Buffer Config: Not Supported 00:39:04.215 Get LBA Status 
Capability: Not Supported 00:39:04.215 Command & Feature Lockdown Capability: Not Supported 00:39:04.215 Abort Command Limit: 1 00:39:04.215 Async Event Request Limit: 1 00:39:04.215 Number of Firmware Slots: N/A 00:39:04.215 Firmware Slot 1 Read-Only: N/A 00:39:04.215 Firmware Activation Without Reset: N/A 00:39:04.215 Multiple Update Detection Support: N/A 00:39:04.215 Firmware Update Granularity: No Information Provided 00:39:04.215 Per-Namespace SMART Log: No 00:39:04.215 Asymmetric Namespace Access Log Page: Not Supported 00:39:04.215 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:39:04.215 Command Effects Log Page: Not Supported 00:39:04.215 Get Log Page Extended Data: Supported 00:39:04.215 Telemetry Log Pages: Not Supported 00:39:04.215 Persistent Event Log Pages: Not Supported 00:39:04.216 Supported Log Pages Log Page: May Support 00:39:04.216 Commands Supported & Effects Log Page: Not Supported 00:39:04.216 Feature Identifiers & Effects Log Page:May Support 00:39:04.216 NVMe-MI Commands & Effects Log Page: May Support 00:39:04.216 Data Area 4 for Telemetry Log: Not Supported 00:39:04.216 Error Log Page Entries Supported: 1 00:39:04.216 Keep Alive: Not Supported 00:39:04.216 00:39:04.216 NVM Command Set Attributes 00:39:04.216 ========================== 00:39:04.216 Submission Queue Entry Size 00:39:04.216 Max: 1 00:39:04.216 Min: 1 00:39:04.216 Completion Queue Entry Size 00:39:04.216 Max: 1 00:39:04.216 Min: 1 00:39:04.216 Number of Namespaces: 0 00:39:04.216 Compare Command: Not Supported 00:39:04.216 Write Uncorrectable Command: Not Supported 00:39:04.216 Dataset Management Command: Not Supported 00:39:04.216 Write Zeroes Command: Not Supported 00:39:04.216 Set Features Save Field: Not Supported 00:39:04.216 Reservations: Not Supported 00:39:04.216 Timestamp: Not Supported 00:39:04.216 Copy: Not Supported 00:39:04.216 Volatile Write Cache: Not Present 00:39:04.216 Atomic Write Unit (Normal): 1 00:39:04.216 Atomic Write Unit (PFail): 1 
00:39:04.216 Atomic Compare & Write Unit: 1 00:39:04.216 Fused Compare & Write: Not Supported 00:39:04.216 Scatter-Gather List 00:39:04.216 SGL Command Set: Supported 00:39:04.216 SGL Keyed: Not Supported 00:39:04.216 SGL Bit Bucket Descriptor: Not Supported 00:39:04.216 SGL Metadata Pointer: Not Supported 00:39:04.216 Oversized SGL: Not Supported 00:39:04.216 SGL Metadata Address: Not Supported 00:39:04.216 SGL Offset: Supported 00:39:04.216 Transport SGL Data Block: Not Supported 00:39:04.216 Replay Protected Memory Block: Not Supported 00:39:04.216 00:39:04.216 Firmware Slot Information 00:39:04.216 ========================= 00:39:04.216 Active slot: 0 00:39:04.216 00:39:04.216 00:39:04.216 Error Log 00:39:04.216 ========= 00:39:04.216 00:39:04.216 Active Namespaces 00:39:04.216 ================= 00:39:04.216 Discovery Log Page 00:39:04.216 ================== 00:39:04.216 Generation Counter: 2 00:39:04.216 Number of Records: 2 00:39:04.216 Record Format: 0 00:39:04.216 00:39:04.216 Discovery Log Entry 0 00:39:04.216 ---------------------- 00:39:04.216 Transport Type: 3 (TCP) 00:39:04.216 Address Family: 1 (IPv4) 00:39:04.216 Subsystem Type: 3 (Current Discovery Subsystem) 00:39:04.216 Entry Flags: 00:39:04.216 Duplicate Returned Information: 0 00:39:04.216 Explicit Persistent Connection Support for Discovery: 0 00:39:04.216 Transport Requirements: 00:39:04.216 Secure Channel: Not Specified 00:39:04.216 Port ID: 1 (0x0001) 00:39:04.216 Controller ID: 65535 (0xffff) 00:39:04.216 Admin Max SQ Size: 32 00:39:04.216 Transport Service Identifier: 4420 00:39:04.216 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:39:04.216 Transport Address: 10.0.0.1 00:39:04.216 Discovery Log Entry 1 00:39:04.216 ---------------------- 00:39:04.216 Transport Type: 3 (TCP) 00:39:04.216 Address Family: 1 (IPv4) 00:39:04.216 Subsystem Type: 2 (NVM Subsystem) 00:39:04.216 Entry Flags: 00:39:04.216 Duplicate Returned Information: 0 00:39:04.216 Explicit Persistent 
Connection Support for Discovery: 0 00:39:04.216 Transport Requirements: 00:39:04.216 Secure Channel: Not Specified 00:39:04.216 Port ID: 1 (0x0001) 00:39:04.216 Controller ID: 65535 (0xffff) 00:39:04.216 Admin Max SQ Size: 32 00:39:04.216 Transport Service Identifier: 4420 00:39:04.216 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:39:04.216 Transport Address: 10.0.0.1 00:39:04.216 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:04.216 EAL: No free 2048 kB hugepages reported on node 1 00:39:04.216 get_feature(0x01) failed 00:39:04.216 get_feature(0x02) failed 00:39:04.216 get_feature(0x04) failed 00:39:04.216 ===================================================== 00:39:04.216 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:04.216 ===================================================== 00:39:04.216 Controller Capabilities/Features 00:39:04.216 ================================ 00:39:04.216 Vendor ID: 0000 00:39:04.216 Subsystem Vendor ID: 0000 00:39:04.216 Serial Number: 0e2fb2e826e522d82229 00:39:04.216 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:39:04.216 Firmware Version: 6.7.0-68 00:39:04.216 Recommended Arb Burst: 6 00:39:04.216 IEEE OUI Identifier: 00 00 00 00:39:04.216 Multi-path I/O 00:39:04.216 May have multiple subsystem ports: Yes 00:39:04.216 May have multiple controllers: Yes 00:39:04.216 Associated with SR-IOV VF: No 00:39:04.216 Max Data Transfer Size: Unlimited 00:39:04.216 Max Number of Namespaces: 1024 00:39:04.216 Max Number of I/O Queues: 128 00:39:04.216 NVMe Specification Version (VS): 1.3 00:39:04.216 NVMe Specification Version (Identify): 1.3 00:39:04.216 Maximum Queue Entries: 1024 00:39:04.216 Contiguous Queues Required: No 00:39:04.216 Arbitration Mechanisms Supported 
00:39:04.216 Weighted Round Robin: Not Supported 00:39:04.216 Vendor Specific: Not Supported 00:39:04.216 Reset Timeout: 7500 ms 00:39:04.216 Doorbell Stride: 4 bytes 00:39:04.216 NVM Subsystem Reset: Not Supported 00:39:04.216 Command Sets Supported 00:39:04.216 NVM Command Set: Supported 00:39:04.216 Boot Partition: Not Supported 00:39:04.216 Memory Page Size Minimum: 4096 bytes 00:39:04.216 Memory Page Size Maximum: 4096 bytes 00:39:04.216 Persistent Memory Region: Not Supported 00:39:04.216 Optional Asynchronous Events Supported 00:39:04.216 Namespace Attribute Notices: Supported 00:39:04.216 Firmware Activation Notices: Not Supported 00:39:04.217 ANA Change Notices: Supported 00:39:04.217 PLE Aggregate Log Change Notices: Not Supported 00:39:04.217 LBA Status Info Alert Notices: Not Supported 00:39:04.217 EGE Aggregate Log Change Notices: Not Supported 00:39:04.217 Normal NVM Subsystem Shutdown event: Not Supported 00:39:04.217 Zone Descriptor Change Notices: Not Supported 00:39:04.217 Discovery Log Change Notices: Not Supported 00:39:04.217 Controller Attributes 00:39:04.217 128-bit Host Identifier: Supported 00:39:04.217 Non-Operational Permissive Mode: Not Supported 00:39:04.217 NVM Sets: Not Supported 00:39:04.217 Read Recovery Levels: Not Supported 00:39:04.217 Endurance Groups: Not Supported 00:39:04.217 Predictable Latency Mode: Not Supported 00:39:04.217 Traffic Based Keep ALive: Supported 00:39:04.217 Namespace Granularity: Not Supported 00:39:04.217 SQ Associations: Not Supported 00:39:04.217 UUID List: Not Supported 00:39:04.217 Multi-Domain Subsystem: Not Supported 00:39:04.217 Fixed Capacity Management: Not Supported 00:39:04.217 Variable Capacity Management: Not Supported 00:39:04.217 Delete Endurance Group: Not Supported 00:39:04.217 Delete NVM Set: Not Supported 00:39:04.217 Extended LBA Formats Supported: Not Supported 00:39:04.217 Flexible Data Placement Supported: Not Supported 00:39:04.217 00:39:04.217 Controller Memory Buffer Support 
00:39:04.217 ================================ 00:39:04.217 Supported: No 00:39:04.217 00:39:04.217 Persistent Memory Region Support 00:39:04.217 ================================ 00:39:04.217 Supported: No 00:39:04.217 00:39:04.217 Admin Command Set Attributes 00:39:04.217 ============================ 00:39:04.217 Security Send/Receive: Not Supported 00:39:04.217 Format NVM: Not Supported 00:39:04.217 Firmware Activate/Download: Not Supported 00:39:04.217 Namespace Management: Not Supported 00:39:04.217 Device Self-Test: Not Supported 00:39:04.217 Directives: Not Supported 00:39:04.217 NVMe-MI: Not Supported 00:39:04.217 Virtualization Management: Not Supported 00:39:04.217 Doorbell Buffer Config: Not Supported 00:39:04.217 Get LBA Status Capability: Not Supported 00:39:04.217 Command & Feature Lockdown Capability: Not Supported 00:39:04.217 Abort Command Limit: 4 00:39:04.217 Async Event Request Limit: 4 00:39:04.217 Number of Firmware Slots: N/A 00:39:04.217 Firmware Slot 1 Read-Only: N/A 00:39:04.217 Firmware Activation Without Reset: N/A 00:39:04.217 Multiple Update Detection Support: N/A 00:39:04.217 Firmware Update Granularity: No Information Provided 00:39:04.217 Per-Namespace SMART Log: Yes 00:39:04.217 Asymmetric Namespace Access Log Page: Supported 00:39:04.217 ANA Transition Time : 10 sec 00:39:04.217 00:39:04.217 Asymmetric Namespace Access Capabilities 00:39:04.217 ANA Optimized State : Supported 00:39:04.217 ANA Non-Optimized State : Supported 00:39:04.217 ANA Inaccessible State : Supported 00:39:04.217 ANA Persistent Loss State : Supported 00:39:04.217 ANA Change State : Supported 00:39:04.217 ANAGRPID is not changed : No 00:39:04.217 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:39:04.217 00:39:04.217 ANA Group Identifier Maximum : 128 00:39:04.217 Number of ANA Group Identifiers : 128 00:39:04.217 Max Number of Allowed Namespaces : 1024 00:39:04.217 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:39:04.217 Command Effects Log Page: Supported 
00:39:04.217 Get Log Page Extended Data: Supported 00:39:04.217 Telemetry Log Pages: Not Supported 00:39:04.217 Persistent Event Log Pages: Not Supported 00:39:04.217 Supported Log Pages Log Page: May Support 00:39:04.217 Commands Supported & Effects Log Page: Not Supported 00:39:04.217 Feature Identifiers & Effects Log Page:May Support 00:39:04.217 NVMe-MI Commands & Effects Log Page: May Support 00:39:04.217 Data Area 4 for Telemetry Log: Not Supported 00:39:04.217 Error Log Page Entries Supported: 128 00:39:04.217 Keep Alive: Supported 00:39:04.217 Keep Alive Granularity: 1000 ms 00:39:04.217 00:39:04.217 NVM Command Set Attributes 00:39:04.217 ========================== 00:39:04.217 Submission Queue Entry Size 00:39:04.217 Max: 64 00:39:04.217 Min: 64 00:39:04.217 Completion Queue Entry Size 00:39:04.217 Max: 16 00:39:04.217 Min: 16 00:39:04.217 Number of Namespaces: 1024 00:39:04.217 Compare Command: Not Supported 00:39:04.217 Write Uncorrectable Command: Not Supported 00:39:04.217 Dataset Management Command: Supported 00:39:04.217 Write Zeroes Command: Supported 00:39:04.217 Set Features Save Field: Not Supported 00:39:04.217 Reservations: Not Supported 00:39:04.217 Timestamp: Not Supported 00:39:04.217 Copy: Not Supported 00:39:04.217 Volatile Write Cache: Present 00:39:04.217 Atomic Write Unit (Normal): 1 00:39:04.217 Atomic Write Unit (PFail): 1 00:39:04.217 Atomic Compare & Write Unit: 1 00:39:04.217 Fused Compare & Write: Not Supported 00:39:04.217 Scatter-Gather List 00:39:04.217 SGL Command Set: Supported 00:39:04.217 SGL Keyed: Not Supported 00:39:04.217 SGL Bit Bucket Descriptor: Not Supported 00:39:04.217 SGL Metadata Pointer: Not Supported 00:39:04.217 Oversized SGL: Not Supported 00:39:04.217 SGL Metadata Address: Not Supported 00:39:04.217 SGL Offset: Supported 00:39:04.217 Transport SGL Data Block: Not Supported 00:39:04.217 Replay Protected Memory Block: Not Supported 00:39:04.217 00:39:04.217 Firmware Slot Information 00:39:04.217 
========================= 00:39:04.217 Active slot: 0 00:39:04.217 00:39:04.217 Asymmetric Namespace Access 00:39:04.217 =========================== 00:39:04.217 Change Count : 0 00:39:04.217 Number of ANA Group Descriptors : 1 00:39:04.217 ANA Group Descriptor : 0 00:39:04.217 ANA Group ID : 1 00:39:04.217 Number of NSID Values : 1 00:39:04.217 Change Count : 0 00:39:04.217 ANA State : 1 00:39:04.218 Namespace Identifier : 1 00:39:04.218 00:39:04.218 Commands Supported and Effects 00:39:04.218 ============================== 00:39:04.218 Admin Commands 00:39:04.218 -------------- 00:39:04.218 Get Log Page (02h): Supported 00:39:04.218 Identify (06h): Supported 00:39:04.218 Abort (08h): Supported 00:39:04.218 Set Features (09h): Supported 00:39:04.218 Get Features (0Ah): Supported 00:39:04.218 Asynchronous Event Request (0Ch): Supported 00:39:04.218 Keep Alive (18h): Supported 00:39:04.218 I/O Commands 00:39:04.218 ------------ 00:39:04.218 Flush (00h): Supported 00:39:04.218 Write (01h): Supported LBA-Change 00:39:04.218 Read (02h): Supported 00:39:04.218 Write Zeroes (08h): Supported LBA-Change 00:39:04.218 Dataset Management (09h): Supported 00:39:04.218 00:39:04.218 Error Log 00:39:04.218 ========= 00:39:04.218 Entry: 0 00:39:04.218 Error Count: 0x3 00:39:04.218 Submission Queue Id: 0x0 00:39:04.218 Command Id: 0x5 00:39:04.218 Phase Bit: 0 00:39:04.218 Status Code: 0x2 00:39:04.218 Status Code Type: 0x0 00:39:04.218 Do Not Retry: 1 00:39:04.218 Error Location: 0x28 00:39:04.218 LBA: 0x0 00:39:04.218 Namespace: 0x0 00:39:04.218 Vendor Log Page: 0x0 00:39:04.218 ----------- 00:39:04.218 Entry: 1 00:39:04.218 Error Count: 0x2 00:39:04.218 Submission Queue Id: 0x0 00:39:04.218 Command Id: 0x5 00:39:04.218 Phase Bit: 0 00:39:04.218 Status Code: 0x2 00:39:04.218 Status Code Type: 0x0 00:39:04.218 Do Not Retry: 1 00:39:04.218 Error Location: 0x28 00:39:04.218 LBA: 0x0 00:39:04.218 Namespace: 0x0 00:39:04.218 Vendor Log Page: 0x0 00:39:04.218 ----------- 00:39:04.218 
Entry: 2 00:39:04.218 Error Count: 0x1 00:39:04.218 Submission Queue Id: 0x0 00:39:04.218 Command Id: 0x4 00:39:04.218 Phase Bit: 0 00:39:04.218 Status Code: 0x2 00:39:04.218 Status Code Type: 0x0 00:39:04.218 Do Not Retry: 1 00:39:04.218 Error Location: 0x28 00:39:04.218 LBA: 0x0 00:39:04.218 Namespace: 0x0 00:39:04.218 Vendor Log Page: 0x0 00:39:04.218 00:39:04.218 Number of Queues 00:39:04.218 ================ 00:39:04.218 Number of I/O Submission Queues: 128 00:39:04.218 Number of I/O Completion Queues: 128 00:39:04.218 00:39:04.218 ZNS Specific Controller Data 00:39:04.218 ============================ 00:39:04.218 Zone Append Size Limit: 0 00:39:04.218 00:39:04.218 00:39:04.218 Active Namespaces 00:39:04.218 ================= 00:39:04.218 get_feature(0x05) failed 00:39:04.218 Namespace ID:1 00:39:04.218 Command Set Identifier: NVM (00h) 00:39:04.218 Deallocate: Supported 00:39:04.218 Deallocated/Unwritten Error: Not Supported 00:39:04.218 Deallocated Read Value: Unknown 00:39:04.218 Deallocate in Write Zeroes: Not Supported 00:39:04.218 Deallocated Guard Field: 0xFFFF 00:39:04.218 Flush: Supported 00:39:04.218 Reservation: Not Supported 00:39:04.218 Namespace Sharing Capabilities: Multiple Controllers 00:39:04.218 Size (in LBAs): 1953525168 (931GiB) 00:39:04.218 Capacity (in LBAs): 1953525168 (931GiB) 00:39:04.218 Utilization (in LBAs): 1953525168 (931GiB) 00:39:04.218 UUID: 72e76883-c36f-4937-be05-36665775d0da 00:39:04.218 Thin Provisioning: Not Supported 00:39:04.218 Per-NS Atomic Units: Yes 00:39:04.218 Atomic Boundary Size (Normal): 0 00:39:04.218 Atomic Boundary Size (PFail): 0 00:39:04.218 Atomic Boundary Offset: 0 00:39:04.218 NGUID/EUI64 Never Reused: No 00:39:04.218 ANA group ID: 1 00:39:04.218 Namespace Write Protected: No 00:39:04.218 Number of LBA Formats: 1 00:39:04.218 Current LBA Format: LBA Format #00 00:39:04.218 LBA Format #00: Data Size: 512 Metadata Size: 0 00:39:04.218 00:39:04.218 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- 
host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:39:04.218 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:04.218 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:39:04.218 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:04.218 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:39:04.218 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:04.218 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:04.218 rmmod nvme_tcp 00:39:04.218 rmmod nvme_fabrics 00:39:04.218 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:04.218 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:39:04.218 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:39:04.218 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:39:04.218 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:04.218 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:04.218 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:04.218 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:04.218 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:04.218 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:04.218 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:04.218 02:44:54 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:06.753 02:44:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:06.753 02:44:56 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:39:06.753 02:44:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:39:06.753 02:44:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:39:06.753 02:44:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:06.753 02:44:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:06.753 02:44:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:39:06.753 02:44:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:06.753 02:44:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:39:06.753 02:44:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:39:06.753 02:44:56 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:07.319 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:39:07.319 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:39:07.319 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:39:07.319 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:39:07.319 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:39:07.319 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:39:07.319 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:39:07.319 0000:00:04.0 (8086 3c20): ioatdma -> 
vfio-pci 00:39:07.319 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:39:07.577 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:39:07.577 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:39:07.577 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:39:07.577 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:39:07.577 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:39:07.577 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:39:07.577 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:39:08.511 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:39:08.511 00:39:08.511 real 0m8.478s 00:39:08.511 user 0m1.614s 00:39:08.511 sys 0m2.983s 00:39:08.511 02:44:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:08.511 02:44:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:39:08.511 ************************************ 00:39:08.511 END TEST nvmf_identify_kernel_target 00:39:08.511 ************************************ 00:39:08.511 02:44:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:39:08.511 02:44:58 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:39:08.511 02:44:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:08.511 02:44:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:08.511 02:44:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:08.511 ************************************ 00:39:08.511 START TEST nvmf_auth_host 00:39:08.511 ************************************ 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:39:08.512 * Looking for test storage... 
00:39:08.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:08.512 
02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:08.512 
02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:39:08.512 02:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:10.411 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:10.411 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:39:10.411 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:10.411 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:10.411 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:10.411 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:10.411 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:10.411 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:39:10.411 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:10.411 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:39:10.411 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:39:10.411 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:39:10.411 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:39:10.411 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:39:10.411 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:39:10.411 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:10.411 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:10.411 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:10.411 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:10.411 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:10.411 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:10.411 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:10.411 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:10.411 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:10.411 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:39:10.412 Found 0000:08:00.0 (0x8086 - 0x159b) 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:10.412 02:45:00 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:39:10.412 Found 0000:08:00.1 (0x8086 - 0x159b) 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices 
under 0000:08:00.0: cvl_0_0' 00:39:10.412 Found net devices under 0000:08:00.0: cvl_0_0 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:39:10.412 Found net devices under 0000:08:00.1: cvl_0_1 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:10.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:10.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:39:10.412 00:39:10.412 --- 10.0.0.2 ping statistics --- 00:39:10.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:10.412 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:10.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:10.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:39:10.412 00:39:10.412 --- 10.0.0.1 ping statistics --- 00:39:10.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:10.412 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:10.412 02:45:00 
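The trace above (nvmf/common.sh@229 through @268) brings up a two-endpoint TCP topology on one host: the target NIC `cvl_0_0` is moved into a fresh network namespace with 10.0.0.2, the initiator NIC `cvl_0_1` stays in the root namespace with 10.0.0.1, port 4420 is opened, and both directions are ping-verified. A hedged, condensed sketch of that plumbing follows; the interface names and addresses are taken from the log, but `setup_tcp_ns` is an illustrative helper (not a function in nvmf/common.sh), and it deliberately no-ops unless run as root on a host that actually has `cvl_0_0`:

```shell
# Condensed sketch of the namespace setup performed in the log above.
# Names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) and IPs come from the trace;
# the function itself is hypothetical.  Guarded so it is a no-op without
# root or without the expected physical interfaces.
setup_tcp_ns() {
    local ns=cvl_0_0_ns_spdk tgt=cvl_0_0 ini=cvl_0_1
    if [ "$(id -u)" -ne 0 ] || [ ! -e "/sys/class/net/$tgt" ]; then
        echo "skipping: needs root and interface $tgt" >&2
        return 0
    fi
    ip -4 addr flush "$tgt"                       # start from clean addresses
    ip -4 addr flush "$ini"
    ip netns add "$ns"
    ip link set "$tgt" netns "$ns"                # target NIC moves into the namespace
    ip addr add 10.0.0.1/24 dev "$ini"            # initiator side stays in the host
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"
    ip link set "$ini" up
    ip netns exec "$ns" ip link set "$tgt" up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i "$ini" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # host -> namespace reachability
    ip netns exec "$ns" ping -c 1 10.0.0.1        # namespace -> host reachability
}
```

The design point worth noting is that moving one physical port into a namespace gives the target and initiator genuinely separate network stacks, so the subsequent `nvmf_tgt` run (launched under `ip netns exec cvl_0_0_ns_spdk`) exercises a real TCP path rather than loopback.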
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1964851 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1964851 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1964851 ']' 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:10.412 02:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:39:10.670 02:45:00 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f7f3fddeeabb8b985d70dca07bbc463a 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Gr2 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f7f3fddeeabb8b985d70dca07bbc463a 0 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f7f3fddeeabb8b985d70dca07bbc463a 0 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f7f3fddeeabb8b985d70dca07bbc463a 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Gr2 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Gr2 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Gr2 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 
64 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=174fac59a76715fa1c7a85cce9617d8b0f5875694eb8ad247c6b4524d5a82e09 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.5a6 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 174fac59a76715fa1c7a85cce9617d8b0f5875694eb8ad247c6b4524d5a82e09 3 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 174fac59a76715fa1c7a85cce9617d8b0f5875694eb8ad247c6b4524d5a82e09 3 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=174fac59a76715fa1c7a85cce9617d8b0f5875694eb8ad247c6b4524d5a82e09 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:39:10.670 02:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:39:10.670 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.5a6 00:39:10.670 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.5a6 00:39:10.670 02:45:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.5a6 00:39:10.670 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:39:10.670 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:39:10.670 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:10.670 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:39:10.670 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:39:10.670 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:39:10.670 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:39:10.671 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e7c413ebc67774a410d17a96778310199a1efbb527165dd2 00:39:10.671 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:39:10.671 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.4ah 00:39:10.671 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e7c413ebc67774a410d17a96778310199a1efbb527165dd2 0 00:39:10.671 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e7c413ebc67774a410d17a96778310199a1efbb527165dd2 0 00:39:10.671 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:39:10.671 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:39:10.671 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e7c413ebc67774a410d17a96778310199a1efbb527165dd2 00:39:10.671 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:39:10.671 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:39:10.671 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.4ah 00:39:10.671 02:45:01 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.4ah 00:39:10.671 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.4ah 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c065c8f1578024ec809ab2c9b013eded67dcff903485b2f4 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.qNc 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c065c8f1578024ec809ab2c9b013eded67dcff903485b2f4 2 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c065c8f1578024ec809ab2c9b013eded67dcff903485b2f4 2 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c065c8f1578024ec809ab2c9b013eded67dcff903485b2f4 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:39:10.928 02:45:01 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.qNc 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.qNc 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.qNc 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cef1a246d439f68e9cf9e51d65b30f59 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Skx 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cef1a246d439f68e9cf9e51d65b30f59 1 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cef1a246d439f68e9cf9e51d65b30f59 1 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cef1a246d439f68e9cf9e51d65b30f59 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Skx 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Skx 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Skx 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=df3cce123459ad0895ad83f64acfbb2f 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.BW3 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key df3cce123459ad0895ad83f64acfbb2f 1 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 df3cce123459ad0895ad83f64acfbb2f 1 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=df3cce123459ad0895ad83f64acfbb2f 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:39:10.928 
02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.BW3 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.BW3 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.BW3 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=21697f6c83b55ea275b77fb0d44ad50adf6b00af2da95b5e 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.koK 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 21697f6c83b55ea275b77fb0d44ad50adf6b00af2da95b5e 2 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 21697f6c83b55ea275b77fb0d44ad50adf6b00af2da95b5e 2 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=21697f6c83b55ea275b77fb0d44ad50adf6b00af2da95b5e 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.koK 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.koK 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.koK 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9b75b56b2a7724902d591eeef4f569c0 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Tkm 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9b75b56b2a7724902d591eeef4f569c0 0 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9b75b56b2a7724902d591eeef4f569c0 0 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:39:10.928 02:45:01 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9b75b56b2a7724902d591eeef4f569c0 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:39:10.928 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:39:11.185 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Tkm 00:39:11.185 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Tkm 00:39:11.185 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Tkm 00:39:11.185 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:39:11.185 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:39:11.186 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:11.186 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:39:11.186 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:39:11.186 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:39:11.186 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:39:11.186 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5b47bb3a39c984ee91b54e7eeae0e4f4186e5933e592d6cd3c0a3613e52c0a3f 00:39:11.186 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:39:11.186 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.zGJ 00:39:11.186 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5b47bb3a39c984ee91b54e7eeae0e4f4186e5933e592d6cd3c0a3613e52c0a3f 3 00:39:11.186 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5b47bb3a39c984ee91b54e7eeae0e4f4186e5933e592d6cd3c0a3613e52c0a3f 3 00:39:11.186 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local 
prefix key digest 00:39:11.186 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:39:11.186 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5b47bb3a39c984ee91b54e7eeae0e4f4186e5933e592d6cd3c0a3613e52c0a3f 00:39:11.186 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:39:11.186 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:39:11.186 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.zGJ 00:39:11.186 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.zGJ 00:39:11.186 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.zGJ 00:39:11.186 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:39:11.186 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1964851 00:39:11.186 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1964851 ']' 00:39:11.186 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:11.186 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:11.186 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:11.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:11.186 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:11.186 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Gr2 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.5a6 ]] 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5a6 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.4ah 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.qNc ]] 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qNc 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Skx 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.BW3 ]] 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BW3 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.koK 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:11.444 
02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Tkm ]] 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Tkm 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.zGJ 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:39:11.444 02:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:12.379 Waiting for block devices as requested 00:39:12.379 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:39:12.637 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:39:12.637 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:39:12.637 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:39:12.895 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:39:12.895 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:39:12.895 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:39:12.895 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:39:13.153 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:39:13.153 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:39:13.153 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:39:13.153 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:39:13.410 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:39:13.410 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:39:13.410 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:39:13.410 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:39:13.667 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:39:13.924 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:39:13.925 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:13.925 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:39:13.925 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:39:13.925 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:13.925 02:45:04 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:39:13.925 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:39:13.925 02:45:04 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:39:13.925 02:45:04 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:39:13.925 No valid GPT data, bailing 00:39:13.925 02:45:04 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:13.925 02:45:04 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:39:13.925 02:45:04 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:39:13.925 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:39:13.925 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:39:13.925 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:39:13.925 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:39:13.925 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:39:13.925 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:39:13.925 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:39:13.925 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:39:13.925 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:39:13.925 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:39:13.925 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:39:13.925 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:39:13.925 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- 
# echo ipv4 00:39:13.925 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:13.925 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:39:14.183 00:39:14.183 Discovery Log Number of Records 2, Generation counter 2 00:39:14.183 =====Discovery Log Entry 0====== 00:39:14.183 trtype: tcp 00:39:14.183 adrfam: ipv4 00:39:14.183 subtype: current discovery subsystem 00:39:14.183 treq: not specified, sq flow control disable supported 00:39:14.183 portid: 1 00:39:14.183 trsvcid: 4420 00:39:14.183 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:14.183 traddr: 10.0.0.1 00:39:14.183 eflags: none 00:39:14.183 sectype: none 00:39:14.183 =====Discovery Log Entry 1====== 00:39:14.183 trtype: tcp 00:39:14.183 adrfam: ipv4 00:39:14.183 subtype: nvme subsystem 00:39:14.183 treq: not specified, sq flow control disable supported 00:39:14.183 portid: 1 00:39:14.183 trsvcid: 4420 00:39:14.183 subnqn: nqn.2024-02.io.spdk:cnode0 00:39:14.183 traddr: 10.0.0.1 00:39:14.183 eflags: none 00:39:14.183 sectype: none 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:14.183 02:45:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: ]] 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.183 nvme0n1 00:39:14.183 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: ]] 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.442 nvme0n1 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.442 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: ]] 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:14.701 02:45:04 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:14.701 02:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:14.702 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:14.702 02:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.702 nvme0n1 00:39:14.702 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:14.702 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:14.702 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:14.702 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:14.702 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.702 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:14.702 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:14.702 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:14.702 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:14.702 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:14.961 02:45:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: ]] 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.961 nvme0n1 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: ]] 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:14.961 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.220 nvme0n1 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:15.220 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:15.221 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:39:15.221 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:15.221 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.221 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:15.221 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:15.221 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:15.221 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:15.221 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:15.221 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:15.221 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:15.221 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:15.221 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:15.221 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:15.221 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:15.221 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:15.221 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:15.221 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:15.221 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.480 nvme0n1 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:15.480 02:45:05 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: ]] 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:15.480 02:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.739 nvme0n1 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: ]] 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # dhgroup=ffdhe3072 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:15.739 02:45:06 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:15.739 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.997 nvme0n1 00:39:15.997 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:15.997 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:15.997 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:15.997 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:15.997 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.997 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:15.997 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:15.997 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:15.997 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:15.997 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.997 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:15.997 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:15.997 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:39:15.997 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:15.997 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:15.997 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:15.997 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:15.997 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: ]] 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:15.998 02:45:06 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:15.998 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.256 nvme0n1 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: ]] 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:16.256 02:45:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:16.256 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:16.257 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:16.257 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:16.257 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:16.257 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:16.257 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.257 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.515 nvme0n1 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.515 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.774 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.774 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:16.774 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:16.774 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:16.774 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # 
local -A ip_candidates 00:39:16.774 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:16.774 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:16.774 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:16.774 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:16.774 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:16.774 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:16.774 02:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:16.774 02:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:16.774 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.774 02:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.774 nvme0n1 00:39:16.774 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.774 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:16.774 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:16.774 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.774 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.774 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.774 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:16.774 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:16.774 02:45:07 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.774 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: ]] 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 
00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:17.107 02:45:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.107 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:17.388 nvme0n1 00:39:17.388 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.388 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:17.388 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:17.388 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.388 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:17.388 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.388 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:17.388 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 
00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: ]] 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:17.389 
02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.389 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:17.655 nvme0n1 00:39:17.655 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.655 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:17.655 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.655 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:17.655 02:45:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:17.655 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.655 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:17.655 02:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:17.655 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.655 02:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: ]] 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:17.655 02:45:08 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.655 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.222 nvme0n1 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: ]] 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:18.222 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.482 nvme0n1 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:18.482 02:45:08 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # 
[[ -z '' ]] 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 
00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:18.482 02:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.741 nvme0n1 00:39:18.741 02:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:18.741 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:18.741 02:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:18.741 02:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:18.741 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:18.741 02:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:18.741 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:18.741 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:18.741 02:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:18.741 02:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: ]] 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:39:19.000 
02:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.000 02:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.568 nvme0n1 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:19.568 02:45:09 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: ]] 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:19.568 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:19.569 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:19.569 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:19.569 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:39:19.569 02:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.569 02:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:19.569 02:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:19.569 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:19.569 02:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:19.569 02:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:19.569 02:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:19.569 02:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:19.569 02:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:19.569 02:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:19.569 02:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:19.569 02:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:19.569 02:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:19.569 02:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:19.569 02:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:19.569 02:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:19.569 02:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.136 nvme0n1 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: ]] 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.136 02:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.395 02:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:20.395 02:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:20.395 02:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:20.395 02:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:20.395 02:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:20.395 02:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:20.395 02:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:20.395 02:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:20.395 02:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:20.395 02:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:20.395 02:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:20.395 02:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:20.395 02:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.395 02:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.961 nvme0n1 
00:39:20.961 02:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.961 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:20.961 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:20.961 02:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.961 02:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.961 02:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.961 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:20.961 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:20.961 02:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.961 02:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.961 02:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.961 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:20.961 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: ]] 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:20.962 02:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.529 nvme0n1 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:21.529 02:45:11 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:21.529 02:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:22.464 nvme0n1 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 
00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: ]] 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:22.464 02:45:12 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:22.464 02:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:23.395 nvme0n1 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: ]] 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.395 02:45:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:24.769 nvme0n1 00:39:24.769 02:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.769 02:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:24.769 02:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.769 02:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:24.769 02:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:24.769 02:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.769 02:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:24.769 02:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:24.769 02:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.769 02:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:24.769 02:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.769 02:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:24.769 02:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:39:24.769 02:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:24.769 02:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:24.769 02:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:24.769 02:45:14 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:39:24.769 02:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:24.769 02:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:24.769 02:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:24.769 02:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:24.769 02:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:24.769 02:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: ]] 00:39:24.769 02:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:24.769 02:45:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:39:24.769 02:45:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:24.769 02:45:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:24.769 02:45:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:24.769 02:45:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:24.769 02:45:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:24.769 02:45:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:39:24.769 02:45:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.769 02:45:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:24.769 02:45:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:24.769 02:45:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:39:24.769 02:45:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:24.769 02:45:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:24.769 02:45:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:24.769 02:45:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:24.769 02:45:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:24.769 02:45:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:24.769 02:45:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:24.769 02:45:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:24.769 02:45:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:24.769 02:45:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:24.769 02:45:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:24.769 02:45:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:24.769 02:45:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.143 nvme0n1 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: ]] 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:26.143 02:45:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:26.143 02:45:16 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:26.143 02:45:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.077 nvme0n1 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:27.077 02:45:17 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:27.077 02:45:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:27.078 02:45:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:27.078 02:45:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:27.078 02:45:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.454 nvme0n1 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: ]] 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:28.454 
02:45:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.454 nvme0n1 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:28.454 
02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: ]] 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.454 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.713 nvme0n1 00:39:28.713 02:45:18 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.713 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:28.713 02:45:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:28.713 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.713 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.713 02:45:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:28.713 02:45:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: ]] 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:28.713 02:45:19 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.713 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.972 nvme0n1 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.972 
02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: ]] 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # keyid=3 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:28.972 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:28.972 02:45:19 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.231 nvme0n1 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:29.231 02:45:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:29.231 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.490 nvme0n1 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: ]] 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:29.490 02:45:19 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:29.490 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.749 nvme0n1 00:39:29.749 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:29.749 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:29.749 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:29.749 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:29.749 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.749 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:29.749 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:29.749 02:45:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:29.749 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:29.749 02:45:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.749 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:29.749 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:29.749 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:39:29.749 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:29.749 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:29.749 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:29.749 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:29.749 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:29.749 02:45:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:29.749 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:29.749 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:29.749 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:29.749 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: ]] 00:39:29.749 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:29.749 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:39:29.749 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:29.749 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:29.750 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:29.750 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:29.750 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:29.750 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:39:29.750 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:29.750 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.750 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:29.750 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:29.750 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 
00:39:29.750 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:29.750 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:29.750 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:29.750 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:29.750 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:29.750 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:29.750 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:29.750 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:29.750 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:29.750 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:29.750 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:29.750 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.008 nvme0n1 00:39:30.008 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: ]] 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:30.009 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.267 nvme0n1 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: ]] 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:30.267 02:45:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:30.267 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.525 nvme0n1 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.525 02:45:20 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 
-- # local digest dhgroup keyid ckey 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:30.525 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:39:30.526 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:30.526 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.526 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:30.526 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:30.526 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:30.526 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:30.526 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:30.526 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:30.526 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:30.526 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:30.526 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:30.526 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:30.526 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:30.526 02:45:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:30.526 02:45:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:30.526 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:30.526 02:45:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.784 nvme0n1 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:30.784 
02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: ]] 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.784 
02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:30.784 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:31.042 nvme0n1 00:39:31.042 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:31.042 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:31.042 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:31.042 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:31.042 02:45:21 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:31.042 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: ]] 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:31.300 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:31.558 nvme0n1 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: ]] 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:31.558 02:45:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:31.559 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:31.559 02:45:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.124 nvme0n1 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: ]] 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:32.124 02:45:22 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:32.124 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:32.125 02:45:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:32.125 02:45:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.382 nvme0n1 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:32.382 02:45:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:39:32.382 02:45:22 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:32.382 02:45:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:32.383 02:45:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:32.383 02:45:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:32.383 02:45:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:32.383 02:45:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:32.383 02:45:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:32.383 02:45:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:32.383 02:45:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:32.383 02:45:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:32.383 02:45:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:32.383 02:45:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:32.383 02:45:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:32.383 02:45:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.640 nvme0n1 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:32.641 02:45:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: ]] 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:32.641 02:45:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.899 02:45:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:32.899 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:32.899 02:45:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:32.899 02:45:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:32.899 02:45:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:32.899 02:45:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:32.899 02:45:23 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:32.899 02:45:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:32.899 02:45:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:32.899 02:45:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:32.899 02:45:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:32.899 02:45:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:32.899 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:32.899 02:45:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:32.899 02:45:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:33.465 nvme0n1 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:33.465 
02:45:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: ]] 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe6144 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:39:33.465 02:45:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.029 nvme0n1 00:39:34.029 02:45:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:34.029 02:45:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:34.029 02:45:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:34.029 02:45:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:34.029 02:45:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.029 02:45:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:34.287 02:45:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:34.287 02:45:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:34.287 02:45:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:34.287 02:45:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.287 02:45:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:34.287 02:45:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:34.287 02:45:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: ]] 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:34.288 02:45:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.854 nvme0n1 00:39:34.854 02:45:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:34.854 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:34.854 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:34.854 02:45:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:34.854 02:45:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.854 02:45:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:34.854 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:34.854 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:34.854 02:45:25 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: ]] 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:34.855 02:45:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.420 nvme0n1 00:39:35.420 02:45:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:35.420 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:35.420 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:35.420 02:45:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:35.420 02:45:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.420 02:45:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:35.420 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:35.420 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:35.420 02:45:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:35.420 02:45:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:35.678 02:45:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.243 nvme0n1 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 
-- # xtrace_disable 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: ]] 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:39:36.244 02:45:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:36.244 02:45:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.633 nvme0n1 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: ]] 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:37.633 02:45:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.634 02:45:27 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:37.634 02:45:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:37.634 02:45:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:37.634 02:45:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:37.634 02:45:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:37.634 02:45:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:37.634 02:45:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:37.634 02:45:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:37.634 02:45:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:37.634 02:45:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:37.634 02:45:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:37.634 02:45:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:37.634 02:45:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:37.634 02:45:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:37.634 02:45:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:38.606 nvme0n1 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: ]] 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:38.606 02:45:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:39.976 nvme0n1 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:39.977 02:45:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: ]] 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:39.977 02:45:30 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:39.977 02:45:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.906 nvme0n1 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:40.906 
02:45:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.906 02:45:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:40.907 02:45:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:40.907 02:45:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:40.907 02:45:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:40.907 02:45:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:40.907 02:45:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:40.907 02:45:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:40.907 02:45:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:40.907 02:45:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:40.907 02:45:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:40.907 02:45:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:40.907 02:45:31 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:40.907 02:45:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:40.907 02:45:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:40.907 02:45:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.278 nvme0n1 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:39:42.278 
02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: ]] 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:42.278 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:42.279 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:42.279 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:42.279 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:42.279 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:42.279 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:42.279 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:42.279 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:42.279 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:42.279 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.279 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.279 nvme0n1 00:39:42.279 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.279 02:45:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:42.279 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:42.279 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.279 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.279 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.279 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:42.279 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:42.279 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.279 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe2048 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: ]] 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.540 nvme0n1 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: ]] 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:42.540 
02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:42.540 02:45:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:42.541 02:45:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:42.541 02:45:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.541 02:45:32 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:39:42.801 nvme0n1 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:42.801 
02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: ]] 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:42.801 02:45:33 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.801 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.060 nvme0n1 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:43.060 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.320 nvme0n1 00:39:43.320 02:45:33 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: ]] 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:43.320 02:45:33 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:43.320 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.579 nvme0n1 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:43.579 
02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: ]] 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe3072 1 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 
00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:43.579 02:45:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.836 nvme0n1 00:39:43.836 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:43.836 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:43.836 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:43.836 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:43.836 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.836 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:43.836 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:43.836 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:43.836 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:43.836 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.836 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:43.836 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:43.836 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:39:43.836 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:43.836 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:43.836 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:39:43.836 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: ]] 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:43.837 02:45:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:43.837 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:44.094 nvme0n1 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:44.094 02:45:34 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: ]] 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:44.094 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:44.095 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:44.095 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:44.095 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:44.352 nvme0n1 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:44.352 02:45:34 
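The repeated get_main_ns_ip trace (nvmf/common.sh@741-755) resolves the attach address by transport: it maps each transport to a variable *name* in an associative array, then expands that name indirectly, which is why every pass echoes 10.0.0.1 before bdev_nvme_attach_controller. A condensed, self-contained sketch of that lookup; the values are taken from this log, and the surrounding variable names are illustrative rather than copied from nvmf/common.sh:

```shell
#!/usr/bin/env bash
# Sketch of the get_main_ns_ip lookup seen in the trace above.
TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1   # value echoed throughout this log
NVMF_FIRST_TARGET_IP=10.0.0.2
declare -A ip_candidates=(
    [rdma]=NVMF_FIRST_TARGET_IP
    [tcp]=NVMF_INITIATOR_IP
)
var=${ip_candidates[$TEST_TRANSPORT]}   # pick the variable *name*
echo "${!var}"                          # indirect expansion -> its value
```

Indirection keeps one code path for both transports; only the name stored in the array differs.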
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:44.352 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:44.610 nvme0n1 00:39:44.610 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:44.610 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:44.610 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:44.610 02:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:44.610 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:44.610 02:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:39:44.610 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:44.610 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:44.610 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:44.610 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:44.610 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:44.610 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:44.610 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:44.610 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:39:44.610 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:44.610 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:44.610 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:44.610 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:44.610 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:44.610 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:44.610 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:44.610 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:44.610 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:44.610 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: ]] 00:39:44.610 02:45:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:44.610 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:39:44.610 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:44.610 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:44.610 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:44.610 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:44.611 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:44.611 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:44.868 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:44.868 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:44.868 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:44.868 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:44.868 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:44.868 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:44.868 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:44.868 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:44.868 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:44.868 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:44.868 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:44.868 02:45:35 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:44.868 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:44.868 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:44.868 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:44.868 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:44.868 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:45.127 nvme0n1 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:39:45.127 02:45:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: ]] 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.127 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:45.385 nvme0n1 00:39:45.385 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.385 02:45:35 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:45.385 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:45.385 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.385 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:45.385 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.385 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:45.385 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:45.385 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.385 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:45.643 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.643 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:45.643 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:39:45.643 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:45.643 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: ]] 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.644 02:45:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:45.902 nvme0n1 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: ]] 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.902 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:46.160 nvme0n1 00:39:46.160 02:45:36 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:46.160 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:46.160 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:46.160 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:46.160 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:46.160 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:46.160 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:46.160 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:46.160 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:46.160 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe4096 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:46.418 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:46.676 nvme0n1 00:39:46.676 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:46.676 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:46.676 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:46.676 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:46.676 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:46.676 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:46.676 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:46.676 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:46.676 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:46.676 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:46.676 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:46.676 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:46.676 
02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:46.676 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:39:46.676 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:46.676 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:46.676 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:46.676 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:46.676 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:46.676 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:46.676 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:46.676 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:46.676 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:46.676 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: ]] 00:39:46.676 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:46.676 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:39:46.676 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:46.677 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:46.677 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:46.677 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 
00:39:46.677 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:46.677 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:46.677 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:46.677 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:46.677 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:46.677 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:46.677 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:46.677 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:46.677 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:46.677 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:46.677 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:46.677 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:46.677 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:46.677 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:46.677 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:46.677 02:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:46.677 02:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:46.677 02:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:46.677 02:45:36 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:47.243 nvme0n1 00:39:47.243 02:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:47.243 02:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:47.243 02:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:47.243 02:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:47.243 02:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:47.243 02:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:47.243 02:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:47.243 02:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:47.243 02:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:47.243 02:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: ]] 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:47.501 02:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.066 nvme0n1 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:48.066 02:45:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: ]] 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid 
ckey 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:48.066 02:45:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.632 nvme0n1 00:39:48.632 02:45:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:48.632 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:48.632 02:45:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:48.632 02:45:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:48.632 02:45:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.632 02:45:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:48.632 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:48.632 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:48.632 02:45:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:48.632 02:45:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.889 02:45:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:48.889 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:48.890 02:45:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: ]] 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:48.890 02:45:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:49.455 nvme0n1 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:49.455 02:45:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:49.455 02:45:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:49.456 02:45:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:49.456 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:49.456 02:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:49.456 02:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:49.456 02:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:49.456 02:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:49.456 02:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:49.456 02:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:49.456 02:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:49.456 02:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:49.456 02:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:49.456 02:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:49.456 02:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:49.456 02:45:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:49.456 02:45:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:50.022 nvme0n1 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=0 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdmM2ZkZGVlYWJiOGI5ODVkNzBkY2EwN2JiYzQ2M2ERTljE: 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: ]] 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTc0ZmFjNTlhNzY3MTVmYTFjN2E4NWNjZTk2MTdkOGIwZjU4NzU2OTRlYjhhZDI0N2M2YjQ1MjRkNWE4MmUwOfKHFhs=: 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.022 02:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.396 nvme0n1 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: ]] 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.396 02:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:51.397 02:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:51.397 02:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:51.397 02:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:51.397 02:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:51.397 02:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:51.397 02:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:51.397 02:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:51.397 02:45:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:51.397 02:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:51.397 02:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:51.397 02:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:51.397 02:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:51.397 02:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:51.397 02:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.331 nvme0n1 00:39:52.331 02:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:52.331 02:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:52.331 02:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:52.331 02:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:52.331 02:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.589 02:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:52.589 02:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:52.589 02:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2VmMWEyNDZkNDM5ZjY4ZTljZjllNTFkNjViMzBmNTn+6uqb: 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: ]] 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGYzY2NlMTIzNDU5YWQwODk1YWQ4M2Y2NGFjZmJiMmb4iSOQ: 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe8192 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:52.590 02:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:53.523 nvme0n1 00:39:53.523 02:45:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:53.523 02:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:39:53.523 02:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:53.523 02:45:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:53.523 02:45:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MjE2OTdmNmM4M2I1NWVhMjc1Yjc3ZmIwZDQ0YWQ1MGFkZjZiMDBhZjJkYTk1YjVlHcfgoQ==: 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: ]] 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI3NWI1NmIyYTc3MjQ5MDJkNTkxZWVlZjRmNTY5YzD9iGvT: 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:53.779 02:45:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:53.779 02:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:53.779 02:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:53.779 02:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:53.779 02:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:53.779 02:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:53.780 02:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:53.780 02:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:53.780 02:45:44 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:53.780 02:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:53.780 02:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:53.780 02:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:53.780 02:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:53.780 02:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:53.780 02:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:53.780 02:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:55.153 nvme0n1 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:55.153 02:45:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWI0N2JiM2EzOWM5ODRlZTkxYjU0ZTdlZWFlMGU0ZjQxODZlNTkzM2U1OTJkNmNkM2MwYTM2MTNlNTJjMGEzZktIJso=: 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:39:55.153 02:45:45 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:55.153 02:45:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:55.154 02:45:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:55.154 02:45:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:55.154 02:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:55.154 02:45:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:55.154 02:45:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:56.088 nvme0n1 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTdjNDEzZWJjNjc3NzRhNDEwZDE3YTk2Nzc4MzEwMTk5YTFlZmJiNTI3MTY1ZGQyCLbzUA==: 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # 
[[ -z DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: ]] 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzA2NWM4ZjE1NzgwMjRlYzgwOWFiMmM5YjAxM2VkZWQ2N2RjZmY5MDM0ODViMmY0WdKPoA==: 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@648 -- # local es=0 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:39:56.088 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:39:56.089 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:56.089 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:39:56.089 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:56.089 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:39:56.089 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.089 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:56.348 request: 00:39:56.348 { 00:39:56.348 "name": "nvme0", 00:39:56.348 "trtype": "tcp", 00:39:56.348 "traddr": "10.0.0.1", 00:39:56.348 "adrfam": "ipv4", 00:39:56.348 "trsvcid": "4420", 00:39:56.348 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:39:56.348 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:39:56.348 "prchk_reftag": false, 00:39:56.348 "prchk_guard": false, 00:39:56.348 "hdgst": false, 00:39:56.348 "ddgst": false, 00:39:56.348 "method": "bdev_nvme_attach_controller", 00:39:56.348 "req_id": 1 00:39:56.348 } 00:39:56.348 Got JSON-RPC error response 00:39:56.348 response: 00:39:56.348 { 00:39:56.348 "code": -5, 00:39:56.348 "message": "Input/output error" 00:39:56.348 } 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 
00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.348 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:56.348 request: 00:39:56.348 { 00:39:56.348 "name": "nvme0", 00:39:56.348 "trtype": "tcp", 00:39:56.348 "traddr": "10.0.0.1", 00:39:56.348 "adrfam": "ipv4", 00:39:56.348 "trsvcid": "4420", 00:39:56.349 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:39:56.349 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:39:56.349 "prchk_reftag": false, 00:39:56.349 "prchk_guard": false, 00:39:56.349 "hdgst": false, 00:39:56.349 "ddgst": false, 00:39:56.349 "dhchap_key": "key2", 00:39:56.349 "method": "bdev_nvme_attach_controller", 00:39:56.349 "req_id": 1 00:39:56.349 } 00:39:56.349 Got JSON-RPC error response 00:39:56.349 response: 00:39:56.349 { 
00:39:56.349 "code": -5, 00:39:56.349 "message": "Input/output error" 00:39:56.349 } 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:56.349 
02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:56.349 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:56.608 request: 00:39:56.608 { 00:39:56.608 "name": "nvme0", 00:39:56.608 "trtype": "tcp", 00:39:56.608 "traddr": "10.0.0.1", 00:39:56.608 "adrfam": "ipv4", 00:39:56.608 "trsvcid": "4420", 00:39:56.608 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:39:56.608 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:39:56.608 
"prchk_reftag": false, 00:39:56.608 "prchk_guard": false, 00:39:56.608 "hdgst": false, 00:39:56.608 "ddgst": false, 00:39:56.608 "dhchap_key": "key1", 00:39:56.608 "dhchap_ctrlr_key": "ckey2", 00:39:56.608 "method": "bdev_nvme_attach_controller", 00:39:56.608 "req_id": 1 00:39:56.608 } 00:39:56.608 Got JSON-RPC error response 00:39:56.608 response: 00:39:56.608 { 00:39:56.608 "code": -5, 00:39:56.608 "message": "Input/output error" 00:39:56.608 } 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:56.608 rmmod nvme_tcp 00:39:56.608 rmmod nvme_fabrics 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@125 -- # return 0 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1964851 ']' 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1964851 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1964851 ']' 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1964851 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1964851 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1964851' 00:39:56.608 killing process with pid 1964851 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1964851 00:39:56.608 02:45:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1964851 00:39:56.608 02:45:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:56.608 02:45:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:56.608 02:45:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:56.608 02:45:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:56.608 02:45:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:56.608 02:45:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:56.608 02:45:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:39:56.608 02:45:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:59.143 02:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:59.143 02:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:39:59.143 02:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:39:59.143 02:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:39:59.143 02:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:39:59.143 02:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:39:59.143 02:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:39:59.143 02:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:39:59.143 02:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:39:59.143 02:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:39:59.143 02:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:39:59.143 02:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:39:59.143 02:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:59.806 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:39:59.806 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:39:59.806 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:39:59.806 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 
00:39:59.806 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:39:59.806 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:40:00.066 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:40:00.066 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:40:00.066 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:40:00.066 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:40:00.066 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:40:00.066 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:40:00.066 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:40:00.066 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:40:00.066 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:40:00.066 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:40:01.004 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:40:01.004 02:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Gr2 /tmp/spdk.key-null.4ah /tmp/spdk.key-sha256.Skx /tmp/spdk.key-sha384.koK /tmp/spdk.key-sha512.zGJ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:40:01.004 02:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:01.938 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:40:01.938 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:40:01.939 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:40:01.939 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:40:01.939 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:40:01.939 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:40:01.939 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:40:01.939 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:40:01.939 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:40:01.939 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:40:01.939 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:40:01.939 
0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:40:01.939 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:40:01.939 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:40:01.939 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:40:01.939 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:40:01.939 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:40:01.939 00:40:01.939 real 0m53.500s 00:40:01.939 user 0m50.888s 00:40:01.939 sys 0m5.188s 00:40:01.939 02:45:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:01.939 02:45:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:01.939 ************************************ 00:40:01.939 END TEST nvmf_auth_host 00:40:01.939 ************************************ 00:40:01.939 02:45:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:40:01.939 02:45:52 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:40:01.939 02:45:52 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:40:01.939 02:45:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:40:01.939 02:45:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:01.939 02:45:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:02.197 ************************************ 00:40:02.197 START TEST nvmf_digest 00:40:02.197 ************************************ 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:40:02.197 * Looking for test storage... 
00:40:02.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:02.197 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:02.198 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:02.198 02:45:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:40:02.198 02:45:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:40:04.101 Found 0000:08:00.0 (0x8086 - 0x159b) 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:40:04.101 Found 0000:08:00.1 (0x8086 - 0x159b) 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:40:04.101 Found net devices under 0000:08:00.0: cvl_0_0 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:40:04.101 Found net devices under 0000:08:00.1: cvl_0_1 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:04.101 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:04.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:04.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:40:04.101 00:40:04.101 --- 10.0.0.2 ping statistics --- 00:40:04.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:04.102 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:04.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:04.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:40:04.102 00:40:04.102 --- 10.0.0.1 ping statistics --- 00:40:04.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:04.102 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:40:04.102 ************************************ 00:40:04.102 START TEST nvmf_digest_clean 00:40:04.102 ************************************ 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:40:04.102 02:45:54 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1973364 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1973364 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1973364 ']' 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:04.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
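The `nvmf_tcp_init` records above build a two-port loopback topology: the target-side port is moved into its own network namespace so initiator and target traffic actually cross the wire between the NIC's two ports. A hedged standalone sketch of that sequence (interface names, addresses, and the namespace name are taken from this log; requires root and the physical NIC, so it is illustrative only, not runnable in CI):

```shell
#!/usr/bin/env bash
# Sketch of nvmf_tcp_init as traced above. Requires root and a two-port
# NIC exposed as cvl_0_0 / cvl_0_1; names and addresses are from this log.
set -e
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# Isolate the target-side port in its own namespace.
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"

ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Admit NVMe/TCP traffic (port 4420), then verify reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

This is why the target app is launched as `ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...` later in the log: only inside that namespace can it bind 10.0.0.2.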
00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:40:04.102 [2024-07-11 02:45:54.237922] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:40:04.102 [2024-07-11 02:45:54.238023] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:04.102 EAL: No free 2048 kB hugepages reported on node 1 00:40:04.102 [2024-07-11 02:45:54.304547] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:04.102 [2024-07-11 02:45:54.394050] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:04.102 [2024-07-11 02:45:54.394112] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:04.102 [2024-07-11 02:45:54.394128] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:04.102 [2024-07-11 02:45:54.394141] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:04.102 [2024-07-11 02:45:54.394161] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:04.102 [2024-07-11 02:45:54.394192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:04.102 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:40:04.360 null0 00:40:04.360 [2024-07-11 02:45:54.613898] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:04.360 [2024-07-11 02:45:54.638074] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:04.360 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:04.360 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:40:04.360 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:40:04.360 
02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:40:04.360 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:40:04.360 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:40:04.360 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:40:04.360 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:40:04.360 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1973447 00:40:04.360 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1973447 /var/tmp/bperf.sock 00:40:04.360 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:40:04.360 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1973447 ']' 00:40:04.360 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:04.360 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:04.361 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:04.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:04.361 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:04.361 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:40:04.361 [2024-07-11 02:45:54.691785] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:40:04.361 [2024-07-11 02:45:54.691884] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1973447 ] 00:40:04.361 EAL: No free 2048 kB hugepages reported on node 1 00:40:04.361 [2024-07-11 02:45:54.754458] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:04.618 [2024-07-11 02:45:54.845997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:04.618 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:04.618 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:40:04.618 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:40:04.618 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:40:04.619 02:45:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:05.184 02:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:05.184 02:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:05.441 nvme0n1 00:40:05.441 02:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:40:05.441 02:45:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:40:05.698 Running I/O for 2 seconds... 00:40:07.594 00:40:07.594 Latency(us) 00:40:07.594 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:07.594 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:40:07.594 nvme0n1 : 2.00 17493.13 68.33 0.00 0.00 7307.34 3956.43 13398.47 00:40:07.594 =================================================================================================================== 00:40:07.594 Total : 17493.13 68.33 0.00 0.00 7307.34 3956.43 13398.47 00:40:07.594 0 00:40:07.594 02:45:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:40:07.594 02:45:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:40:07.594 02:45:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:40:07.594 02:45:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:40:07.594 | select(.opcode=="crc32c") 00:40:07.594 | "\(.module_name) \(.executed)"' 00:40:07.594 02:45:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:40:07.852 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:40:07.852 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:40:07.852 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:40:07.852 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:40:07.852 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1973447 00:40:07.852 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1973447 ']' 00:40:07.852 02:45:58 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1973447 00:40:07.852 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:40:07.852 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:07.852 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1973447 00:40:07.852 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:07.852 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:07.852 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1973447' 00:40:07.852 killing process with pid 1973447 00:40:07.852 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1973447 00:40:07.852 Received shutdown signal, test time was about 2.000000 seconds 00:40:07.852 00:40:07.852 Latency(us) 00:40:07.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:07.852 =================================================================================================================== 00:40:07.852 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:07.852 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1973447 00:40:08.111 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:40:08.111 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:40:08.111 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:40:08.111 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:40:08.111 02:45:58 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:40:08.111 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:40:08.111 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:40:08.111 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1973782 00:40:08.111 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1973782 /var/tmp/bperf.sock 00:40:08.111 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:40:08.111 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1973782 ']' 00:40:08.111 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:08.111 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:08.111 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:08.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:08.111 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:08.111 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:40:08.111 [2024-07-11 02:45:58.463486] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:40:08.111 [2024-07-11 02:45:58.463598] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1973782 ] 00:40:08.111 I/O size of 131072 is greater than zero copy threshold (65536). 00:40:08.111 Zero copy mechanism will not be used. 00:40:08.111 EAL: No free 2048 kB hugepages reported on node 1 00:40:08.111 [2024-07-11 02:45:58.523225] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:08.369 [2024-07-11 02:45:58.610876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:08.369 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:08.369 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:40:08.369 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:40:08.369 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:40:08.369 02:45:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:08.935 02:45:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:08.935 02:45:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:09.193 nvme0n1 00:40:09.193 02:45:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:40:09.193 02:45:59 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:09.193 I/O size of 131072 is greater than zero copy threshold (65536). 00:40:09.193 Zero copy mechanism will not be used. 00:40:09.193 Running I/O for 2 seconds... 00:40:11.720 00:40:11.720 Latency(us) 00:40:11.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:11.720 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:40:11.720 nvme0n1 : 2.00 4688.87 586.11 0.00 0.00 3407.75 767.62 5121.52 00:40:11.720 =================================================================================================================== 00:40:11.720 Total : 4688.87 586.11 0.00 0.00 3407.75 767.62 5121.52 00:40:11.720 0 00:40:11.720 02:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:40:11.720 02:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:40:11.720 02:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:40:11.720 02:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:40:11.720 | select(.opcode=="crc32c") 00:40:11.720 | "\(.module_name) \(.executed)"' 00:40:11.720 02:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:40:11.720 02:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:40:11.720 02:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:40:11.720 02:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:40:11.720 02:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:40:11.721 02:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1973782 00:40:11.721 02:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1973782 ']' 00:40:11.721 02:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1973782 00:40:11.721 02:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:40:11.721 02:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:11.721 02:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1973782 00:40:11.721 02:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:11.721 02:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:11.721 02:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1973782' 00:40:11.721 killing process with pid 1973782 00:40:11.721 02:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1973782 00:40:11.721 Received shutdown signal, test time was about 2.000000 seconds 00:40:11.721 00:40:11.721 Latency(us) 00:40:11.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:11.721 =================================================================================================================== 00:40:11.721 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:11.721 02:46:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1973782 00:40:11.721 02:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:40:11.721 02:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 
00:40:11.721 02:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:40:11.721 02:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:40:11.721 02:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:40:11.721 02:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:40:11.721 02:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:40:11.721 02:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1974099 00:40:11.721 02:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1974099 /var/tmp/bperf.sock 00:40:11.721 02:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:40:11.721 02:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1974099 ']' 00:40:11.721 02:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:11.721 02:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:11.721 02:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:11.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:11.721 02:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:11.721 02:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:40:11.721 [2024-07-11 02:46:02.130239] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:40:11.721 [2024-07-11 02:46:02.130341] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974099 ] 00:40:11.979 EAL: No free 2048 kB hugepages reported on node 1 00:40:11.979 [2024-07-11 02:46:02.189732] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:11.979 [2024-07-11 02:46:02.277199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:11.979 02:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:11.979 02:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:40:11.979 02:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:40:11.979 02:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:40:11.979 02:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:12.544 02:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:12.544 02:46:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:13.109 nvme0n1 00:40:13.109 02:46:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:40:13.109 02:46:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:40:13.109 Running I/O for 2 seconds... 00:40:15.009 00:40:15.010 Latency(us) 00:40:15.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:15.010 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:15.010 nvme0n1 : 2.01 17307.46 67.61 0.00 0.00 7377.42 6844.87 14078.10 00:40:15.010 =================================================================================================================== 00:40:15.010 Total : 17307.46 67.61 0.00 0.00 7377.42 6844.87 14078.10 00:40:15.010 0 00:40:15.010 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:40:15.010 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:40:15.010 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:40:15.010 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:40:15.010 | select(.opcode=="crc32c") 00:40:15.010 | "\(.module_name) \(.executed)"' 00:40:15.010 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1974099 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1974099 ']' 00:40:15.575 02:46:05 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1974099 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1974099 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1974099' 00:40:15.575 killing process with pid 1974099 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1974099 00:40:15.575 Received shutdown signal, test time was about 2.000000 seconds 00:40:15.575 00:40:15.575 Latency(us) 00:40:15.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:15.575 =================================================================================================================== 00:40:15.575 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1974099 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:40:15.575 02:46:05 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1974496 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1974496 /var/tmp/bperf.sock 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1974496 ']' 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:15.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:15.575 02:46:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:40:15.575 [2024-07-11 02:46:05.962158] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:40:15.575 [2024-07-11 02:46:05.962258] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974496 ] 00:40:15.575 I/O size of 131072 is greater than zero copy threshold (65536). 00:40:15.575 Zero copy mechanism will not be used. 00:40:15.575 EAL: No free 2048 kB hugepages reported on node 1 00:40:15.833 [2024-07-11 02:46:06.022402] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:15.833 [2024-07-11 02:46:06.109816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:15.833 02:46:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:15.833 02:46:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:40:15.833 02:46:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:40:15.833 02:46:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:40:15.833 02:46:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:16.399 02:46:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:16.399 02:46:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:16.657 nvme0n1 00:40:16.657 02:46:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:40:16.657 02:46:06 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:16.916 I/O size of 131072 is greater than zero copy threshold (65536). 00:40:16.916 Zero copy mechanism will not be used. 00:40:16.916 Running I/O for 2 seconds... 00:40:18.816 00:40:18.817 Latency(us) 00:40:18.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:18.817 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:40:18.817 nvme0n1 : 2.00 5618.61 702.33 0.00 0.00 2838.61 2111.72 6092.42 00:40:18.817 =================================================================================================================== 00:40:18.817 Total : 5618.61 702.33 0.00 0.00 2838.61 2111.72 6092.42 00:40:18.817 0 00:40:18.817 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:40:18.817 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:40:18.817 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:40:18.817 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:40:18.817 | select(.opcode=="crc32c") 00:40:18.817 | "\(.module_name) \(.executed)"' 00:40:18.817 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:40:19.075 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:40:19.075 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:40:19.075 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:40:19.075 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:40:19.075 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1974496 00:40:19.075 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1974496 ']' 00:40:19.075 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1974496 00:40:19.075 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:40:19.075 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:19.075 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1974496 00:40:19.075 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:19.075 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:19.075 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1974496' 00:40:19.075 killing process with pid 1974496 00:40:19.075 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1974496 00:40:19.075 Received shutdown signal, test time was about 2.000000 seconds 00:40:19.075 00:40:19.075 Latency(us) 00:40:19.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:19.075 =================================================================================================================== 00:40:19.075 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:19.075 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1974496 00:40:19.334 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1973364 00:40:19.334 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1973364 ']' 00:40:19.334 
02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1973364 00:40:19.334 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:40:19.334 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:19.334 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1973364 00:40:19.334 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:19.334 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:19.334 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1973364' 00:40:19.334 killing process with pid 1973364 00:40:19.334 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1973364 00:40:19.334 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1973364 00:40:19.593 00:40:19.593 real 0m15.629s 00:40:19.593 user 0m31.573s 00:40:19.593 sys 0m4.245s 00:40:19.593 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:19.593 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:40:19.593 ************************************ 00:40:19.593 END TEST nvmf_digest_clean 00:40:19.593 ************************************ 00:40:19.593 02:46:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:40:19.593 02:46:09 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:40:19.593 02:46:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:19.593 02:46:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:19.593 02:46:09 
nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:40:19.593 ************************************ 00:40:19.593 START TEST nvmf_digest_error 00:40:19.593 ************************************ 00:40:19.593 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:40:19.593 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:40:19.593 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:19.593 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:19.593 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:19.593 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1974831 00:40:19.593 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:40:19.593 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1974831 00:40:19.593 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1974831 ']' 00:40:19.593 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:19.593 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:19.593 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:19.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:19.593 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:19.593 02:46:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:19.593 [2024-07-11 02:46:09.918576] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:40:19.593 [2024-07-11 02:46:09.918668] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:19.593 EAL: No free 2048 kB hugepages reported on node 1 00:40:19.593 [2024-07-11 02:46:09.982305] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:19.850 [2024-07-11 02:46:10.068922] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:19.851 [2024-07-11 02:46:10.068983] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:19.851 [2024-07-11 02:46:10.069001] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:19.851 [2024-07-11 02:46:10.069016] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:19.851 [2024-07-11 02:46:10.069029] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:19.851 [2024-07-11 02:46:10.069059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:19.851 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:19.851 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:40:19.851 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:19.851 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:19.851 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:19.851 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:19.851 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:40:19.851 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:19.851 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:19.851 [2024-07-11 02:46:10.197797] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:40:19.851 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:19.851 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:40:19.851 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:40:19.851 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:19.851 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:20.107 null0 00:40:20.107 [2024-07-11 02:46:10.296137] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:20.107 
[2024-07-11 02:46:10.320345] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:20.107 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:20.107 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:40:20.107 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:40:20.107 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:40:20.107 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:40:20.107 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:40:20.107 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1974945 00:40:20.107 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1974945 /var/tmp/bperf.sock 00:40:20.107 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:40:20.107 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1974945 ']' 00:40:20.107 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:20.107 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:20.107 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:20.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:40:20.107 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:20.107 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:20.107 [2024-07-11 02:46:10.371013] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:40:20.107 [2024-07-11 02:46:10.371106] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974945 ] 00:40:20.107 EAL: No free 2048 kB hugepages reported on node 1 00:40:20.107 [2024-07-11 02:46:10.431242] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:20.107 [2024-07-11 02:46:10.518774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:20.365 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:20.365 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:40:20.365 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:40:20.365 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:40:20.623 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:40:20.623 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:20.623 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:20.623 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:40:20.623 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:20.623 02:46:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:21.189 nvme0n1 00:40:21.189 02:46:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:40:21.189 02:46:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:21.189 02:46:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:21.190 02:46:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:21.190 02:46:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:40:21.190 02:46:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:21.190 Running I/O for 2 seconds... 
00:40:21.190 [2024-07-11 02:46:11.511367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.190 [2024-07-11 02:46:11.511425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.190 [2024-07-11 02:46:11.511454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.190 [2024-07-11 02:46:11.524258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.190 [2024-07-11 02:46:11.524296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.190 [2024-07-11 02:46:11.524326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.190 [2024-07-11 02:46:11.541418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.190 [2024-07-11 02:46:11.541456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.190 [2024-07-11 02:46:11.541485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.190 [2024-07-11 02:46:11.554824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.190 [2024-07-11 02:46:11.554860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.190 [2024-07-11 02:46:11.554889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.190 [2024-07-11 02:46:11.574090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.190 [2024-07-11 02:46:11.574128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.190 [2024-07-11 02:46:11.574159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.190 [2024-07-11 02:46:11.590076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.190 [2024-07-11 02:46:11.590123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.190 [2024-07-11 02:46:11.590153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.190 [2024-07-11 02:46:11.603996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.190 [2024-07-11 02:46:11.604034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.190 [2024-07-11 02:46:11.604063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.448 [2024-07-11 02:46:11.621427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.448 [2024-07-11 02:46:11.621479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.448 [2024-07-11 02:46:11.621517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.448 [2024-07-11 02:46:11.634398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.448 [2024-07-11 02:46:11.634435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.448 [2024-07-11 02:46:11.634464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.448 [2024-07-11 02:46:11.651818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.448 [2024-07-11 02:46:11.651856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.448 [2024-07-11 02:46:11.651885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.448 [2024-07-11 02:46:11.665642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.448 [2024-07-11 02:46:11.665678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.448 [2024-07-11 02:46:11.665708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.448 [2024-07-11 02:46:11.680911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.448 [2024-07-11 02:46:11.680948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:21.448 [2024-07-11 02:46:11.680977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.448 [2024-07-11 02:46:11.695670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.448 [2024-07-11 02:46:11.695707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.448 [2024-07-11 02:46:11.695737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.448 [2024-07-11 02:46:11.711031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.448 [2024-07-11 02:46:11.711068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.448 [2024-07-11 02:46:11.711108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.448 [2024-07-11 02:46:11.724734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.448 [2024-07-11 02:46:11.724770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.448 [2024-07-11 02:46:11.724798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.448 [2024-07-11 02:46:11.741277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.448 [2024-07-11 02:46:11.741314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:22069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.448 [2024-07-11 02:46:11.741343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.449 [2024-07-11 02:46:11.754964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.449 [2024-07-11 02:46:11.755001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.449 [2024-07-11 02:46:11.755031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.449 [2024-07-11 02:46:11.772073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.449 [2024-07-11 02:46:11.772110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.449 [2024-07-11 02:46:11.772139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.449 [2024-07-11 02:46:11.787959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.449 [2024-07-11 02:46:11.787998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.449 [2024-07-11 02:46:11.788028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.449 [2024-07-11 02:46:11.800878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.449 [2024-07-11 02:46:11.800914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.449 [2024-07-11 02:46:11.800944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.449 [2024-07-11 02:46:11.815090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.449 [2024-07-11 02:46:11.815126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.449 [2024-07-11 02:46:11.815156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.449 [2024-07-11 02:46:11.830065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.449 [2024-07-11 02:46:11.830110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.449 [2024-07-11 02:46:11.830141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.449 [2024-07-11 02:46:11.845436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.449 [2024-07-11 02:46:11.845481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.449 [2024-07-11 02:46:11.845519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.449 [2024-07-11 02:46:11.859836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf366d0) 00:40:21.449 [2024-07-11 02:46:11.859873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.449 [2024-07-11 02:46:11.859902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.707 [2024-07-11 02:46:11.874282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.707 [2024-07-11 02:46:11.874347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.707 [2024-07-11 02:46:11.874379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.707 [2024-07-11 02:46:11.889261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.707 [2024-07-11 02:46:11.889299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.707 [2024-07-11 02:46:11.889328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.707 [2024-07-11 02:46:11.902227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.707 [2024-07-11 02:46:11.902262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.707 [2024-07-11 02:46:11.902292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.707 [2024-07-11 02:46:11.918439] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.707 [2024-07-11 02:46:11.918477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.707 [2024-07-11 02:46:11.918506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.707 [2024-07-11 02:46:11.936062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.707 [2024-07-11 02:46:11.936099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.707 [2024-07-11 02:46:11.936129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.707 [2024-07-11 02:46:11.955410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.707 [2024-07-11 02:46:11.955446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.707 [2024-07-11 02:46:11.955476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.707 [2024-07-11 02:46:11.968184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.707 [2024-07-11 02:46:11.968220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.707 [2024-07-11 02:46:11.968250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:40:21.707 [2024-07-11 02:46:11.984406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.708 [2024-07-11 02:46:11.984444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.708 [2024-07-11 02:46:11.984474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.708 [2024-07-11 02:46:12.001681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.708 [2024-07-11 02:46:12.001718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.708 [2024-07-11 02:46:12.001747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.708 [2024-07-11 02:46:12.015711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.708 [2024-07-11 02:46:12.015747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.708 [2024-07-11 02:46:12.015777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.708 [2024-07-11 02:46:12.031715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.708 [2024-07-11 02:46:12.031751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.708 [2024-07-11 02:46:12.031780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.708 [2024-07-11 02:46:12.046617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.708 [2024-07-11 02:46:12.046652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.708 [2024-07-11 02:46:12.046682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.708 [2024-07-11 02:46:12.059552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.708 [2024-07-11 02:46:12.059588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.708 [2024-07-11 02:46:12.059618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.708 [2024-07-11 02:46:12.073170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.708 [2024-07-11 02:46:12.073206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.708 [2024-07-11 02:46:12.073235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.708 [2024-07-11 02:46:12.088825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.708 [2024-07-11 02:46:12.088861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.708 [2024-07-11 02:46:12.088891] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.708 [2024-07-11 02:46:12.105610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.708 [2024-07-11 02:46:12.105647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.708 [2024-07-11 02:46:12.105687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.708 [2024-07-11 02:46:12.121456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.708 [2024-07-11 02:46:12.121520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.708 [2024-07-11 02:46:12.121552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.972 [2024-07-11 02:46:12.133793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.972 [2024-07-11 02:46:12.133831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.972 [2024-07-11 02:46:12.133861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.972 [2024-07-11 02:46:12.151851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.972 [2024-07-11 02:46:12.151888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24666 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:40:21.972 [2024-07-11 02:46:12.151918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.972 [2024-07-11 02:46:12.169530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.972 [2024-07-11 02:46:12.169575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.972 [2024-07-11 02:46:12.169605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.972 [2024-07-11 02:46:12.186276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.972 [2024-07-11 02:46:12.186323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.972 [2024-07-11 02:46:12.186353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.972 [2024-07-11 02:46:12.198621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.972 [2024-07-11 02:46:12.198666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.973 [2024-07-11 02:46:12.198696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.973 [2024-07-11 02:46:12.214884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.973 [2024-07-11 02:46:12.214921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:104 nsid:1 lba:25419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.973 [2024-07-11 02:46:12.214951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.973 [2024-07-11 02:46:12.229967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.973 [2024-07-11 02:46:12.230003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.973 [2024-07-11 02:46:12.230034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.973 [2024-07-11 02:46:12.244637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.973 [2024-07-11 02:46:12.244673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.973 [2024-07-11 02:46:12.244702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.973 [2024-07-11 02:46:12.258679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.973 [2024-07-11 02:46:12.258714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.973 [2024-07-11 02:46:12.258744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.973 [2024-07-11 02:46:12.273616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.973 [2024-07-11 02:46:12.273651] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.973 [2024-07-11 02:46:12.273681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.973 [2024-07-11 02:46:12.286989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.973 [2024-07-11 02:46:12.287032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.973 [2024-07-11 02:46:12.287061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.973 [2024-07-11 02:46:12.303880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.973 [2024-07-11 02:46:12.303916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.973 [2024-07-11 02:46:12.303975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.973 [2024-07-11 02:46:12.317408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.973 [2024-07-11 02:46:12.317443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.973 [2024-07-11 02:46:12.317472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.973 [2024-07-11 02:46:12.330934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf366d0) 00:40:21.973 [2024-07-11 02:46:12.330969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.973 [2024-07-11 02:46:12.330999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.973 [2024-07-11 02:46:12.347036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.973 [2024-07-11 02:46:12.347072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.973 [2024-07-11 02:46:12.347102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.973 [2024-07-11 02:46:12.360573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.973 [2024-07-11 02:46:12.360608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.973 [2024-07-11 02:46:12.360650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.973 [2024-07-11 02:46:12.378477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:21.973 [2024-07-11 02:46:12.378523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.973 [2024-07-11 02:46:12.378555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.257 [2024-07-11 02:46:12.395621] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.257 [2024-07-11 02:46:12.395674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.257 [2024-07-11 02:46:12.395705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.257 [2024-07-11 02:46:12.409002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.257 [2024-07-11 02:46:12.409056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.257 [2024-07-11 02:46:12.409088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.257 [2024-07-11 02:46:12.424611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.257 [2024-07-11 02:46:12.424647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.257 [2024-07-11 02:46:12.424676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.257 [2024-07-11 02:46:12.440417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.257 [2024-07-11 02:46:12.440453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.257 [2024-07-11 02:46:12.440483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:40:22.257 [2024-07-11 02:46:12.454798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.257 [2024-07-11 02:46:12.454835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.257 [2024-07-11 02:46:12.454865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.257 [2024-07-11 02:46:12.467637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.257 [2024-07-11 02:46:12.467672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.257 [2024-07-11 02:46:12.467703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.257 [2024-07-11 02:46:12.484440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.257 [2024-07-11 02:46:12.484476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.257 [2024-07-11 02:46:12.484507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.257 [2024-07-11 02:46:12.499620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.257 [2024-07-11 02:46:12.499666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.257 [2024-07-11 02:46:12.499697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.257 [2024-07-11 02:46:12.515596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.257 [2024-07-11 02:46:12.515632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.257 [2024-07-11 02:46:12.515662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.257 [2024-07-11 02:46:12.532725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.257 [2024-07-11 02:46:12.532774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.257 [2024-07-11 02:46:12.532808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.257 [2024-07-11 02:46:12.546465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.257 [2024-07-11 02:46:12.546501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.257 [2024-07-11 02:46:12.546539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.257 [2024-07-11 02:46:12.564759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.257 [2024-07-11 02:46:12.564795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.257 [2024-07-11 02:46:12.564826] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.257 [2024-07-11 02:46:12.578118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.257 [2024-07-11 02:46:12.578154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.257 [2024-07-11 02:46:12.578184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.257 [2024-07-11 02:46:12.594291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.257 [2024-07-11 02:46:12.594327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.257 [2024-07-11 02:46:12.594357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.257 [2024-07-11 02:46:12.607521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.257 [2024-07-11 02:46:12.607557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.257 [2024-07-11 02:46:12.607607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.257 [2024-07-11 02:46:12.624109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.257 [2024-07-11 02:46:12.624145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11719 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:40:22.257 [2024-07-11 02:46:12.624175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.257 [2024-07-11 02:46:12.639278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.257 [2024-07-11 02:46:12.639316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.257 [2024-07-11 02:46:12.639346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.257 [2024-07-11 02:46:12.653135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.257 [2024-07-11 02:46:12.653169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.257 [2024-07-11 02:46:12.653199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.257 [2024-07-11 02:46:12.667617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.257 [2024-07-11 02:46:12.667671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.257 [2024-07-11 02:46:12.667704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.524 [2024-07-11 02:46:12.681993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.524 [2024-07-11 02:46:12.682029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:108 nsid:1 lba:24314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.524 [2024-07-11 02:46:12.682073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.524 [2024-07-11 02:46:12.697561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.524 [2024-07-11 02:46:12.697596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.524 [2024-07-11 02:46:12.697626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.524 [2024-07-11 02:46:12.712020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.524 [2024-07-11 02:46:12.712078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.524 [2024-07-11 02:46:12.712112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.524 [2024-07-11 02:46:12.725248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.524 [2024-07-11 02:46:12.725284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.524 [2024-07-11 02:46:12.725313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.524 [2024-07-11 02:46:12.740102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.524 [2024-07-11 02:46:12.740137] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.524 [2024-07-11 02:46:12.740166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.524 [2024-07-11 02:46:12.755659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.524 [2024-07-11 02:46:12.755694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.524 [2024-07-11 02:46:12.755731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.524 [2024-07-11 02:46:12.768907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.524 [2024-07-11 02:46:12.768942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.524 [2024-07-11 02:46:12.768972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.524 [2024-07-11 02:46:12.783562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.524 [2024-07-11 02:46:12.783596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.524 [2024-07-11 02:46:12.783625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.524 [2024-07-11 02:46:12.800499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf366d0) 00:40:22.524 [2024-07-11 02:46:12.800551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.524 [2024-07-11 02:46:12.800589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.524 [2024-07-11 02:46:12.813165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.524 [2024-07-11 02:46:12.813200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.524 [2024-07-11 02:46:12.813230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.524 [2024-07-11 02:46:12.831731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.524 [2024-07-11 02:46:12.831767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.524 [2024-07-11 02:46:12.831797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.524 [2024-07-11 02:46:12.845290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.524 [2024-07-11 02:46:12.845332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.524 [2024-07-11 02:46:12.845362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.524 [2024-07-11 02:46:12.860965] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.524 [2024-07-11 02:46:12.861000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.525 [2024-07-11 02:46:12.861030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.525 [2024-07-11 02:46:12.879900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.525 [2024-07-11 02:46:12.879936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.525 [2024-07-11 02:46:12.879965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.525 [2024-07-11 02:46:12.895339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.525 [2024-07-11 02:46:12.895380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.525 [2024-07-11 02:46:12.895410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.525 [2024-07-11 02:46:12.907925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.525 [2024-07-11 02:46:12.907960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.525 [2024-07-11 02:46:12.907990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:40:22.525 [2024-07-11 02:46:12.923491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.525 [2024-07-11 02:46:12.923539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.525 [2024-07-11 02:46:12.923571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.525 [2024-07-11 02:46:12.938243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.525 [2024-07-11 02:46:12.938294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.525 [2024-07-11 02:46:12.938325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.781 [2024-07-11 02:46:12.951852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.781 [2024-07-11 02:46:12.951887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.781 [2024-07-11 02:46:12.951917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.781 [2024-07-11 02:46:12.966677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.781 [2024-07-11 02:46:12.966720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.781 [2024-07-11 02:46:12.966748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.781 [2024-07-11 02:46:12.980962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.781 [2024-07-11 02:46:12.981003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.781 [2024-07-11 02:46:12.981033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.781 [2024-07-11 02:46:12.994782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.781 [2024-07-11 02:46:12.994816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.781 [2024-07-11 02:46:12.994846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.781 [2024-07-11 02:46:13.009305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.781 [2024-07-11 02:46:13.009340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.781 [2024-07-11 02:46:13.009371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.781 [2024-07-11 02:46:13.027581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.781 [2024-07-11 02:46:13.027625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.781 [2024-07-11 02:46:13.027655] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.781 [2024-07-11 02:46:13.041583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.781 [2024-07-11 02:46:13.041618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.781 [2024-07-11 02:46:13.041647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.781 [2024-07-11 02:46:13.059779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.782 [2024-07-11 02:46:13.059815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.782 [2024-07-11 02:46:13.059844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.782 [2024-07-11 02:46:13.072933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.782 [2024-07-11 02:46:13.072969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.782 [2024-07-11 02:46:13.073000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.782 [2024-07-11 02:46:13.089781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.782 [2024-07-11 02:46:13.089816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13476 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:40:22.782 [2024-07-11 02:46:13.089845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.782 [2024-07-11 02:46:13.106891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.782 [2024-07-11 02:46:13.106961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.782 [2024-07-11 02:46:13.107005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.782 [2024-07-11 02:46:13.120558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.782 [2024-07-11 02:46:13.120594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.782 [2024-07-11 02:46:13.120624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.782 [2024-07-11 02:46:13.137541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.782 [2024-07-11 02:46:13.137577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.782 [2024-07-11 02:46:13.137605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.782 [2024-07-11 02:46:13.151043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.782 [2024-07-11 02:46:13.151086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:67 nsid:1 lba:12989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.782 [2024-07-11 02:46:13.151128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.782 [2024-07-11 02:46:13.171908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.782 [2024-07-11 02:46:13.171950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.782 [2024-07-11 02:46:13.171982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.782 [2024-07-11 02:46:13.189997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:22.782 [2024-07-11 02:46:13.190032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.782 [2024-07-11 02:46:13.190061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:23.038 [2024-07-11 02:46:13.208991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:23.038 [2024-07-11 02:46:13.209027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:23.038 [2024-07-11 02:46:13.209057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:23.038 [2024-07-11 02:46:13.224741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:23.038 [2024-07-11 02:46:13.224777] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:23.038 [2024-07-11 02:46:13.224806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:23.038 [2024-07-11 02:46:13.238957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:23.038 [2024-07-11 02:46:13.238993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:23.039 [2024-07-11 02:46:13.239037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:23.039 [2024-07-11 02:46:13.255844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:23.039 [2024-07-11 02:46:13.255895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:23.039 [2024-07-11 02:46:13.255926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:23.039 [2024-07-11 02:46:13.269297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:23.039 [2024-07-11 02:46:13.269334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:23.039 [2024-07-11 02:46:13.269369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:23.039 [2024-07-11 02:46:13.287232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf366d0) 00:40:23.039 [2024-07-11 02:46:13.287268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:23.039 [2024-07-11 02:46:13.287298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:23.039 [2024-07-11 02:46:13.303667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:23.039 [2024-07-11 02:46:13.303704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:23.039 [2024-07-11 02:46:13.303733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:23.039 [2024-07-11 02:46:13.317253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:23.039 [2024-07-11 02:46:13.317289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:23.039 [2024-07-11 02:46:13.317318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:23.039 [2024-07-11 02:46:13.334023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:23.039 [2024-07-11 02:46:13.334059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:23.039 [2024-07-11 02:46:13.334089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:23.039 [2024-07-11 02:46:13.347169] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:23.039 [2024-07-11 02:46:13.347216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:23.039 [2024-07-11 02:46:13.347245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:23.039 [2024-07-11 02:46:13.364754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:23.039 [2024-07-11 02:46:13.364790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:23.039 [2024-07-11 02:46:13.364821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:23.039 [2024-07-11 02:46:13.380065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:23.039 [2024-07-11 02:46:13.380102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:23.039 [2024-07-11 02:46:13.380131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:23.039 [2024-07-11 02:46:13.394355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:23.039 [2024-07-11 02:46:13.394391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:23.039 [2024-07-11 02:46:13.394421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:40:23.039 [2024-07-11 02:46:13.410035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:23.039 [2024-07-11 02:46:13.410071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:23.039 [2024-07-11 02:46:13.410100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:23.039 [2024-07-11 02:46:13.424404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:23.039 [2024-07-11 02:46:13.424440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:23.039 [2024-07-11 02:46:13.424478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:23.039 [2024-07-11 02:46:13.439778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:23.039 [2024-07-11 02:46:13.439813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:23.039 [2024-07-11 02:46:13.439843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:23.039 [2024-07-11 02:46:13.453934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:23.039 [2024-07-11 02:46:13.453969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:23.039 [2024-07-11 02:46:13.453998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:23.296 [2024-07-11 02:46:13.468994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:23.296 [2024-07-11 02:46:13.469039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:23.296 [2024-07-11 02:46:13.469078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:23.297 [2024-07-11 02:46:13.484255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf366d0) 00:40:23.297 [2024-07-11 02:46:13.484290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:23.297 [2024-07-11 02:46:13.484319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:23.297 00:40:23.297 Latency(us) 00:40:23.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:23.297 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:40:23.297 nvme0n1 : 2.01 16631.87 64.97 0.00 0.00 7683.54 4004.98 28544.57 00:40:23.297 =================================================================================================================== 00:40:23.297 Total : 16631.87 64.97 0.00 0.00 7683.54 4004.98 28544.57 00:40:23.297 0 00:40:23.297 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:40:23.297 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:40:23.297 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:40:23.297 | .driver_specific 
00:40:23.297 | .nvme_error 00:40:23.297 | .status_code 00:40:23.297 | .command_transient_transport_error' 00:40:23.297 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:40:23.554 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 130 > 0 )) 00:40:23.554 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1974945 00:40:23.554 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1974945 ']' 00:40:23.554 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1974945 00:40:23.554 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:40:23.554 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:23.554 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1974945 00:40:23.554 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:23.554 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:23.554 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1974945' 00:40:23.554 killing process with pid 1974945 00:40:23.554 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1974945 00:40:23.554 Received shutdown signal, test time was about 2.000000 seconds 00:40:23.554 00:40:23.554 Latency(us) 00:40:23.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:23.554 =================================================================================================================== 00:40:23.554 Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:23.554 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1974945 00:40:23.811 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:40:23.811 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:40:23.811 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:40:23.811 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:40:23.811 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:40:23.811 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1975263 00:40:23.811 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1975263 /var/tmp/bperf.sock 00:40:23.811 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:40:23.811 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1975263 ']' 00:40:23.811 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:23.811 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:23.811 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:23.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:40:23.811 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:23.812 02:46:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:23.812 [2024-07-11 02:46:14.037706] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:40:23.812 [2024-07-11 02:46:14.037798] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975263 ] 00:40:23.812 I/O size of 131072 is greater than zero copy threshold (65536). 00:40:23.812 Zero copy mechanism will not be used. 00:40:23.812 EAL: No free 2048 kB hugepages reported on node 1 00:40:23.812 [2024-07-11 02:46:14.096836] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:23.812 [2024-07-11 02:46:14.184110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:24.068 02:46:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:24.068 02:46:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:40:24.068 02:46:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:40:24.068 02:46:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:40:24.325 02:46:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:40:24.325 02:46:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:24.325 02:46:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 
-- # set +x 00:40:24.325 02:46:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:24.325 02:46:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:24.325 02:46:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:24.582 nvme0n1 00:40:24.582 02:46:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:40:24.582 02:46:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:24.582 02:46:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:24.582 02:46:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:24.582 02:46:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:40:24.582 02:46:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:24.582 I/O size of 131072 is greater than zero copy threshold (65536). 00:40:24.582 Zero copy mechanism will not be used. 00:40:24.582 Running I/O for 2 seconds... 
00:40:24.582 [2024-07-11 02:46:14.964309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.582 [2024-07-11 02:46:14.964366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.582 [2024-07-11 02:46:14.964396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:24.582 [2024-07-11 02:46:14.969083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.582 [2024-07-11 02:46:14.969122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.582 [2024-07-11 02:46:14.969152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:24.582 [2024-07-11 02:46:14.974577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.582 [2024-07-11 02:46:14.974614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.582 [2024-07-11 02:46:14.974650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:24.582 [2024-07-11 02:46:14.980264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.582 [2024-07-11 02:46:14.980302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.582 [2024-07-11 02:46:14.980333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:24.582 [2024-07-11 02:46:14.986381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.582 [2024-07-11 02:46:14.986419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.582 [2024-07-11 02:46:14.986449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:24.582 [2024-07-11 02:46:14.992306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.582 [2024-07-11 02:46:14.992342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.582 [2024-07-11 02:46:14.992372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:24.582 [2024-07-11 02:46:14.997993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.582 [2024-07-11 02:46:14.998029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.582 [2024-07-11 02:46:14.998059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:24.582 [2024-07-11 02:46:15.002079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.582 [2024-07-11 02:46:15.002116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.582 [2024-07-11 02:46:15.002145] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:24.840 [2024-07-11 02:46:15.007094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.840 [2024-07-11 02:46:15.007132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.840 [2024-07-11 02:46:15.007162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:24.840 [2024-07-11 02:46:15.013086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.840 [2024-07-11 02:46:15.013123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.840 [2024-07-11 02:46:15.013152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:24.840 [2024-07-11 02:46:15.019193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.840 [2024-07-11 02:46:15.019229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.840 [2024-07-11 02:46:15.019260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:24.840 [2024-07-11 02:46:15.024852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.840 [2024-07-11 02:46:15.024889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:40:24.840 [2024-07-11 02:46:15.024918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:24.840 [2024-07-11 02:46:15.031066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.840 [2024-07-11 02:46:15.031104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.031133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.035909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.035946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.035984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.039529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.039566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.039595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.043963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.044000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.044029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.048657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.048694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.048724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.052172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.052208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.052236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.056993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.057029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.057058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.061725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.061761] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.061790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.064979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.065014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.065042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.069241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.069276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.069306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.072941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.072983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.073012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.076581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 
00:40:24.841 [2024-07-11 02:46:15.076617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.076646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.080270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.080305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.080335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.084711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.084748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.084776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.088250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.088285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.088314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.093180] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.093217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.093246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.100456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.100493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.100532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.106397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.106433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.106468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.112123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.112159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.112190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.117753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.117788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.117817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.122372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.122408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.122437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.125622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.125657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.125687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.129500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.129545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.129574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.134371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.134408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.134438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.138305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.138341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.138370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.143237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.143274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.143303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.149342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.149377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.149406] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.155376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.155413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.155451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.161507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.161549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.161579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.168498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.168542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.168572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.174609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.174645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:40:24.841 [2024-07-11 02:46:15.174674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.180969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.181004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.181034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.186807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.186843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.186873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.193096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.193131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.193161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.199225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.199261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 
nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.199290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.205777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.205814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.205843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.212281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.212324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.212354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.218432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.218469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.218497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.224903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.224939] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.224968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.231493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.231537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.231567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.238130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.238166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.238194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.244507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.244549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.244579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.251120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 
00:40:24.841 [2024-07-11 02:46:15.251156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.251186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:24.841 [2024-07-11 02:46:15.255832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:24.841 [2024-07-11 02:46:15.255867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:24.841 [2024-07-11 02:46:15.255897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:25.100 [2024-07-11 02:46:15.260790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.100 [2024-07-11 02:46:15.260831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.100 [2024-07-11 02:46:15.260861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:25.100 [2024-07-11 02:46:15.266724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.100 [2024-07-11 02:46:15.266762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.100 [2024-07-11 02:46:15.266791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:25.100 [2024-07-11 02:46:15.272225] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.100 [2024-07-11 02:46:15.272261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.100 [2024-07-11 02:46:15.272290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:25.100 [2024-07-11 02:46:15.278358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.100 [2024-07-11 02:46:15.278394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.100 [2024-07-11 02:46:15.278422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:25.100 [2024-07-11 02:46:15.284263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.100 [2024-07-11 02:46:15.284298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.100 [2024-07-11 02:46:15.284328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:25.100 [2024-07-11 02:46:15.289733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.100 [2024-07-11 02:46:15.289767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.100 [2024-07-11 02:46:15.289796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:40:25.100 [2024-07-11 02:46:15.295151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.100 [2024-07-11 02:46:15.295186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.100 [2024-07-11 02:46:15.295215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:25.100 [2024-07-11 02:46:15.300650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.100 [2024-07-11 02:46:15.300685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.100 [2024-07-11 02:46:15.300714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:25.100 [2024-07-11 02:46:15.306076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.100 [2024-07-11 02:46:15.306110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.100 [2024-07-11 02:46:15.306139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:25.100 [2024-07-11 02:46:15.310892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.100 [2024-07-11 02:46:15.310928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.100 [2024-07-11 02:46:15.310972] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:25.100 [2024-07-11 02:46:15.314900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.100 [2024-07-11 02:46:15.314936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.100 [2024-07-11 02:46:15.314965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-record pattern — nvme_tcp.c:1459 data digest error on tqpair=(0x1a0afb0), the failing READ command (qid:1, len:32, varying cid and lba), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats continuously from 02:46:15.318 through 02:46:15.760 ...]
00:40:25.361 [2024-07-11 02:46:15.767637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.361 [2024-07-11 02:46:15.767677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.361 [2024-07-11 02:46:15.767706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061
p:0 m:0 dnr:0 00:40:25.361 [2024-07-11 02:46:15.773798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.361 [2024-07-11 02:46:15.773837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.361 [2024-07-11 02:46:15.773866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:25.361 [2024-07-11 02:46:15.778206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.361 [2024-07-11 02:46:15.778246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.361 [2024-07-11 02:46:15.778275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:25.620 [2024-07-11 02:46:15.783235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.620 [2024-07-11 02:46:15.783290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.620 [2024-07-11 02:46:15.783322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:25.620 [2024-07-11 02:46:15.789607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.620 [2024-07-11 02:46:15.789646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.620 [2024-07-11 02:46:15.789674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:25.620 [2024-07-11 02:46:15.795668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.620 [2024-07-11 02:46:15.795706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.620 [2024-07-11 02:46:15.795735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:25.620 [2024-07-11 02:46:15.801896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.620 [2024-07-11 02:46:15.801934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.620 [2024-07-11 02:46:15.801963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:25.620 [2024-07-11 02:46:15.807403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.620 [2024-07-11 02:46:15.807442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.620 [2024-07-11 02:46:15.807472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:25.620 [2024-07-11 02:46:15.812788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.620 [2024-07-11 02:46:15.812824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.620 [2024-07-11 02:46:15.812852] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:25.620 [2024-07-11 02:46:15.818264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.620 [2024-07-11 02:46:15.818302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.620 [2024-07-11 02:46:15.818341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:25.620 [2024-07-11 02:46:15.823699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.620 [2024-07-11 02:46:15.823737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.620 [2024-07-11 02:46:15.823766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:25.620 [2024-07-11 02:46:15.829225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.620 [2024-07-11 02:46:15.829262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.620 [2024-07-11 02:46:15.829291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:25.620 [2024-07-11 02:46:15.834781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.620 [2024-07-11 02:46:15.834819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:40:25.620 [2024-07-11 02:46:15.834848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:25.620 [2024-07-11 02:46:15.840271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.620 [2024-07-11 02:46:15.840309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.620 [2024-07-11 02:46:15.840337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:25.620 [2024-07-11 02:46:15.845918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.620 [2024-07-11 02:46:15.845955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.620 [2024-07-11 02:46:15.845985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:25.620 [2024-07-11 02:46:15.851373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.620 [2024-07-11 02:46:15.851409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.620 [2024-07-11 02:46:15.851437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:25.620 [2024-07-11 02:46:15.856772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.620 [2024-07-11 02:46:15.856809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.620 [2024-07-11 02:46:15.856837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:25.620 [2024-07-11 02:46:15.861792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.620 [2024-07-11 02:46:15.861832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.620 [2024-07-11 02:46:15.861860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:25.620 [2024-07-11 02:46:15.865487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.620 [2024-07-11 02:46:15.865538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.620 [2024-07-11 02:46:15.865568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:25.620 [2024-07-11 02:46:15.869650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.620 [2024-07-11 02:46:15.869687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.620 [2024-07-11 02:46:15.869716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:25.620 [2024-07-11 02:46:15.874184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.620 [2024-07-11 02:46:15.874221] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.620 [2024-07-11 02:46:15.874250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:25.620 [2024-07-11 02:46:15.877393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.620 [2024-07-11 02:46:15.877429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.620 [2024-07-11 02:46:15.877457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:25.620 [2024-07-11 02:46:15.881857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.620 [2024-07-11 02:46:15.881896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.620 [2024-07-11 02:46:15.881925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:25.620 [2024-07-11 02:46:15.886559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:15.886597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:15.886627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:15.890001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:15.890036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:15.890065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:15.897042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:15.897079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:15.897108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:15.902038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:15.902075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:15.902104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:15.907637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:15.907674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:15.907704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:15.913389] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:15.913427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:15.913456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:15.918758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:15.918799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:15.918828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:15.924191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:15.924231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:15.924261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:15.930415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:15.930453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:15.930481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:15.936432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:15.936472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:15.936500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:15.940224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:15.940261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:15.940291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:15.944638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:15.944676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:15.944704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:15.949812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:15.949850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:15.949890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:15.955774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:15.955813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:15.955841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:15.961690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:15.961729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:15.961760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:15.967183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:15.967220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:15.967249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:15.972704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:15.972742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:15.972772] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:15.978227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:15.978265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:15.978294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:15.983756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:15.983794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:15.983823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:15.989245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:15.989283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:15.989311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:15.994638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:15.994674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:704 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:15.994702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:16.000175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:16.000213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:16.000241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:16.005741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:16.005779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:16.005808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:16.011327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:16.011365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:16.011394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:16.016882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:16.016921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:16.016949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:16.022117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:16.022154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:16.022183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:16.027361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:16.027401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:16.027430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:16.032723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:16.032760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.621 [2024-07-11 02:46:16.032789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:25.621 [2024-07-11 02:46:16.038393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:25.621 [2024-07-11 02:46:16.038430] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.621 [2024-07-11 02:46:16.038475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:25.881 [2024-07-11 02:46:16.044149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.881 [2024-07-11 02:46:16.044189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.881 [2024-07-11 02:46:16.044250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:25.881 [2024-07-11 02:46:16.049945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.881 [2024-07-11 02:46:16.049984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.881 [2024-07-11 02:46:16.050014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:25.881 [2024-07-11 02:46:16.055595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.881 [2024-07-11 02:46:16.055633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.881 [2024-07-11 02:46:16.055662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:25.881 [2024-07-11 02:46:16.061149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.881 [2024-07-11 02:46:16.061186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.881 [2024-07-11 02:46:16.061215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:25.881 [2024-07-11 02:46:16.066703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.881 [2024-07-11 02:46:16.066740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.881 [2024-07-11 02:46:16.066769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:25.881 [2024-07-11 02:46:16.072107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.881 [2024-07-11 02:46:16.072144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.881 [2024-07-11 02:46:16.072173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:25.881 [2024-07-11 02:46:16.075358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.881 [2024-07-11 02:46:16.075394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.881 [2024-07-11 02:46:16.075423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:25.881 [2024-07-11 02:46:16.079932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.881 [2024-07-11 02:46:16.079970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.881 [2024-07-11 02:46:16.080000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:25.881 [2024-07-11 02:46:16.085317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.881 [2024-07-11 02:46:16.085355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.881 [2024-07-11 02:46:16.085384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:25.881 [2024-07-11 02:46:16.090816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.881 [2024-07-11 02:46:16.090863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.881 [2024-07-11 02:46:16.090892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:25.881 [2024-07-11 02:46:16.096314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.881 [2024-07-11 02:46:16.096352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.881 [2024-07-11 02:46:16.096381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:25.881 [2024-07-11 02:46:16.101858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.881 [2024-07-11 02:46:16.101894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.881 [2024-07-11 02:46:16.101922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:25.881 [2024-07-11 02:46:16.107374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.881 [2024-07-11 02:46:16.107410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.881 [2024-07-11 02:46:16.107438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:25.881 [2024-07-11 02:46:16.112932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.881 [2024-07-11 02:46:16.112968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.881 [2024-07-11 02:46:16.112998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:25.881 [2024-07-11 02:46:16.118488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.118532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.118562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.124099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.124137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.124166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.129583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.129625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.129653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.135178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.135215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.135243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.139705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.139742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.139771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.142870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.142904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.142932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.147310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.147347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.147376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.152160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.152200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.152228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.157234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.157272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.157301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.160455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.160489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.160529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.165217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.165255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.165283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.171186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.171225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.171253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.177352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.177390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.177431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.183025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.183063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.183092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.188982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.189019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.189047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.194647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.194683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.194713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.200617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.200654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.200683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.206687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.206725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.206754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.212870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.212908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.212937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.219028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.219067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.219095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.224911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.224949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.224978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.230550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.230599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.230628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.236317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.236357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.236386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.242761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.242800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.242828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.249167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.249206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.249235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.255295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.255333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.255362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.262057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.262096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.262125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.268242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.268281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.268310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.274392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.882 [2024-07-11 02:46:16.274430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.882 [2024-07-11 02:46:16.274458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:25.882 [2024-07-11 02:46:16.280226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.883 [2024-07-11 02:46:16.280265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.883 [2024-07-11 02:46:16.280295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:25.883 [2024-07-11 02:46:16.283615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.883 [2024-07-11 02:46:16.283650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.883 [2024-07-11 02:46:16.283678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:25.883 [2024-07-11 02:46:16.289478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.883 [2024-07-11 02:46:16.289523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.883 [2024-07-11 02:46:16.289555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:25.883 [2024-07-11 02:46:16.295390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:25.883 [2024-07-11 02:46:16.295428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:25.883 [2024-07-11 02:46:16.295457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.142 [2024-07-11 02:46:16.301368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.142 [2024-07-11 02:46:16.301407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.142 [2024-07-11 02:46:16.301436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.142 [2024-07-11 02:46:16.308180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.142 [2024-07-11 02:46:16.308219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.142 [2024-07-11 02:46:16.308248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.142 [2024-07-11 02:46:16.313959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.142 [2024-07-11 02:46:16.313997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.142 [2024-07-11 02:46:16.314026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.142 [2024-07-11 02:46:16.319249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.142 [2024-07-11 02:46:16.319288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.142 [2024-07-11 02:46:16.319317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.142 [2024-07-11 02:46:16.324462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.142 [2024-07-11 02:46:16.324501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.142 [2024-07-11 02:46:16.324539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.142 [2024-07-11 02:46:16.330479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.142 [2024-07-11 02:46:16.330525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.142 [2024-07-11 02:46:16.330566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.142 [2024-07-11 02:46:16.336415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.142 [2024-07-11 02:46:16.336454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.142 [2024-07-11 02:46:16.336483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.142 [2024-07-11 02:46:16.340358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.142 [2024-07-11 02:46:16.340395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.142 [2024-07-11 02:46:16.340424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.142 [2024-07-11 02:46:16.345873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.142 [2024-07-11 02:46:16.345912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.142 [2024-07-11 02:46:16.345941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.142 [2024-07-11 02:46:16.351918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.142 [2024-07-11 02:46:16.351956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.142 [2024-07-11 02:46:16.351985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.142 [2024-07-11 02:46:16.357666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.142 [2024-07-11 02:46:16.357702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.142 [2024-07-11 02:46:16.357731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.142 [2024-07-11 02:46:16.363343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.142 [2024-07-11 02:46:16.363380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.142 [2024-07-11 02:46:16.363409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.142 [2024-07-11 02:46:16.369011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.142 [2024-07-11 02:46:16.369049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.142 [2024-07-11 02:46:16.369079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.142 [2024-07-11 02:46:16.374994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.142 [2024-07-11 02:46:16.375032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.142 [2024-07-11 02:46:16.375062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.142 [2024-07-11 02:46:16.380867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.143 [2024-07-11 02:46:16.380905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.143 [2024-07-11 02:46:16.380933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.143 [2024-07-11 02:46:16.386468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.143 [2024-07-11 02:46:16.386506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.143 [2024-07-11 02:46:16.386544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.143 [2024-07-11 02:46:16.391810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.143 [2024-07-11 02:46:16.391848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.143 [2024-07-11 02:46:16.391876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.143 [2024-07-11 02:46:16.396807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.143 [2024-07-11 02:46:16.396845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.143 [2024-07-11 02:46:16.396874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.143 [2024-07-11 02:46:16.402339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.143 [2024-07-11 02:46:16.402376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.143 [2024-07-11 02:46:16.402405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.143 [2024-07-11 02:46:16.407883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.143 [2024-07-11 02:46:16.407920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.143 [2024-07-11 02:46:16.407948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.143 [2024-07-11 02:46:16.413488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.143 [2024-07-11 02:46:16.413530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.143 [2024-07-11 02:46:16.413560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.143 [2024-07-11 02:46:16.418959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.143 [2024-07-11 02:46:16.418996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.143 [2024-07-11 02:46:16.419024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.143 [2024-07-11 02:46:16.424517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.143 [2024-07-11 02:46:16.424552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.143 [2024-07-11 02:46:16.424593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.143 [2024-07-11 02:46:16.430040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.143 [2024-07-11 02:46:16.430076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.143 [2024-07-11 02:46:16.430105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.143 [2024-07-11 02:46:16.435531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.143 [2024-07-11 02:46:16.435567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.143 [2024-07-11 02:46:16.435595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.143 [2024-07-11 02:46:16.440948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.143 [2024-07-11 02:46:16.440984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.143 [2024-07-11 02:46:16.441012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.143 [2024-07-11 02:46:16.446419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.143 [2024-07-11 02:46:16.446456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.143 [2024-07-11 02:46:16.446484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.143 [2024-07-11 02:46:16.452757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.143 [2024-07-11 02:46:16.452800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.143 [2024-07-11 02:46:16.452828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.143 [2024-07-11 02:46:16.459024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.143 [2024-07-11 02:46:16.459063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.143 [2024-07-11 02:46:16.459092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.143 [2024-07-11 02:46:16.464455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.143 [2024-07-11 02:46:16.464490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.143 [2024-07-11 02:46:16.464527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.143 [2024-07-11 02:46:16.468555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.143 [2024-07-11 02:46:16.468590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.143 [2024-07-11 02:46:16.468619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.143 [2024-07-11 02:46:16.473797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.143 [2024-07-11 02:46:16.473843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.143 [2024-07-11 02:46:16.473873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.143 [2024-07-11 02:46:16.480010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.143 [2024-07-11 02:46:16.480047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.143 [2024-07-11 02:46:16.480076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.143 [2024-07-11 02:46:16.487769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.143 [2024-07-11 02:46:16.487805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.143 [2024-07-11 02:46:16.487835]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:26.143 [2024-07-11 02:46:16.495448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.143 [2024-07-11 02:46:16.495485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.143 [2024-07-11 02:46:16.495524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:26.143 [2024-07-11 02:46:16.503865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.143 [2024-07-11 02:46:16.503902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.143 [2024-07-11 02:46:16.503930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:26.143 [2024-07-11 02:46:16.509181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.143 [2024-07-11 02:46:16.509216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.143 [2024-07-11 02:46:16.509245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:26.143 [2024-07-11 02:46:16.515755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.143 [2024-07-11 02:46:16.515792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:40:26.143 [2024-07-11 02:46:16.515820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:26.143 [2024-07-11 02:46:16.523070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.143 [2024-07-11 02:46:16.523115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.143 [2024-07-11 02:46:16.523145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:26.143 [2024-07-11 02:46:16.531140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.143 [2024-07-11 02:46:16.531176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.143 [2024-07-11 02:46:16.531204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:26.143 [2024-07-11 02:46:16.539251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.143 [2024-07-11 02:46:16.539287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.143 [2024-07-11 02:46:16.539318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:26.143 [2024-07-11 02:46:16.545189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.143 [2024-07-11 02:46:16.545233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.143 [2024-07-11 02:46:16.545261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:26.143 [2024-07-11 02:46:16.550815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.143 [2024-07-11 02:46:16.550852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.143 [2024-07-11 02:46:16.550881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:26.143 [2024-07-11 02:46:16.556446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.144 [2024-07-11 02:46:16.556482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.144 [2024-07-11 02:46:16.556519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:26.403 [2024-07-11 02:46:16.562139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.403 [2024-07-11 02:46:16.562175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.403 [2024-07-11 02:46:16.562205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:26.403 [2024-07-11 02:46:16.567873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.403 [2024-07-11 02:46:16.567909] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.403 [2024-07-11 02:46:16.567945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:26.403 [2024-07-11 02:46:16.573538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.403 [2024-07-11 02:46:16.573575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.403 [2024-07-11 02:46:16.573604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:26.403 [2024-07-11 02:46:16.579164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.403 [2024-07-11 02:46:16.579200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.403 [2024-07-11 02:46:16.579228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:26.403 [2024-07-11 02:46:16.584794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.403 [2024-07-11 02:46:16.584829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.403 [2024-07-11 02:46:16.584869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:26.403 [2024-07-11 02:46:16.590304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a0afb0) 00:40:26.403 [2024-07-11 02:46:16.590341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.403 [2024-07-11 02:46:16.590369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:26.403 [2024-07-11 02:46:16.595750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.403 [2024-07-11 02:46:16.595785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.403 [2024-07-11 02:46:16.595814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:26.403 [2024-07-11 02:46:16.601159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.403 [2024-07-11 02:46:16.601194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.403 [2024-07-11 02:46:16.601222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:26.403 [2024-07-11 02:46:16.606561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.403 [2024-07-11 02:46:16.606596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.403 [2024-07-11 02:46:16.606624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:26.403 [2024-07-11 02:46:16.612045] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.403 [2024-07-11 02:46:16.612080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.403 [2024-07-11 02:46:16.612110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:26.403 [2024-07-11 02:46:16.617650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.403 [2024-07-11 02:46:16.617684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.403 [2024-07-11 02:46:16.617713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:26.403 [2024-07-11 02:46:16.623165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.403 [2024-07-11 02:46:16.623207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.403 [2024-07-11 02:46:16.623237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:26.403 [2024-07-11 02:46:16.628705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.403 [2024-07-11 02:46:16.628740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.403 [2024-07-11 02:46:16.628768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:40:26.403 [2024-07-11 02:46:16.634280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.403 [2024-07-11 02:46:16.634322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.403 [2024-07-11 02:46:16.634351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:26.403 [2024-07-11 02:46:16.639826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.403 [2024-07-11 02:46:16.639860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.403 [2024-07-11 02:46:16.639889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:26.403 [2024-07-11 02:46:16.645766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.403 [2024-07-11 02:46:16.645803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.403 [2024-07-11 02:46:16.645832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:26.403 [2024-07-11 02:46:16.651619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.403 [2024-07-11 02:46:16.651655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.403 [2024-07-11 02:46:16.651685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:26.403 [2024-07-11 02:46:16.657353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.403 [2024-07-11 02:46:16.657390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.403 [2024-07-11 02:46:16.657419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:26.403 [2024-07-11 02:46:16.663287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.403 [2024-07-11 02:46:16.663323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.403 [2024-07-11 02:46:16.663351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:26.403 [2024-07-11 02:46:16.669434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.403 [2024-07-11 02:46:16.669472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.403 [2024-07-11 02:46:16.669501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:26.403 [2024-07-11 02:46:16.675561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.404 [2024-07-11 02:46:16.675602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.404 [2024-07-11 02:46:16.675630] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:26.404 [2024-07-11 02:46:16.680706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.404 [2024-07-11 02:46:16.680742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.404 [2024-07-11 02:46:16.680779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:26.404 [2024-07-11 02:46:16.684163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.404 [2024-07-11 02:46:16.684197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.404 [2024-07-11 02:46:16.684226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:26.404 [2024-07-11 02:46:16.689629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.404 [2024-07-11 02:46:16.689663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.404 [2024-07-11 02:46:16.689692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:26.404 [2024-07-11 02:46:16.695102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.404 [2024-07-11 02:46:16.695136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:40:26.404 [2024-07-11 02:46:16.695165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:26.404 [2024-07-11 02:46:16.700545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.404 [2024-07-11 02:46:16.700582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.404 [2024-07-11 02:46:16.700611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:26.404 [2024-07-11 02:46:16.707034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.404 [2024-07-11 02:46:16.707086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.404 [2024-07-11 02:46:16.707115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:26.404 [2024-07-11 02:46:16.713201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.404 [2024-07-11 02:46:16.713252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.404 [2024-07-11 02:46:16.713280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:26.404 [2024-07-11 02:46:16.720184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.404 [2024-07-11 02:46:16.720228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.404 [2024-07-11 02:46:16.720257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:26.404 [2024-07-11 02:46:16.727740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.404 [2024-07-11 02:46:16.727776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.404 [2024-07-11 02:46:16.727805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:26.404 [2024-07-11 02:46:16.735669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.404 [2024-07-11 02:46:16.735720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.404 [2024-07-11 02:46:16.735749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:26.404 [2024-07-11 02:46:16.742465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.404 [2024-07-11 02:46:16.742501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.404 [2024-07-11 02:46:16.742542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:26.404 [2024-07-11 02:46:16.750444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.404 [2024-07-11 02:46:16.750484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.404 [2024-07-11 02:46:16.750520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:26.404 [2024-07-11 02:46:16.758034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.404 [2024-07-11 02:46:16.758071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.404 [2024-07-11 02:46:16.758100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:26.404 [2024-07-11 02:46:16.765978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.404 [2024-07-11 02:46:16.766013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.404 [2024-07-11 02:46:16.766042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:26.404 [2024-07-11 02:46:16.773921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.404 [2024-07-11 02:46:16.773957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.404 [2024-07-11 02:46:16.773987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:26.404 [2024-07-11 02:46:16.778003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 
00:40:26.404 [2024-07-11 02:46:16.778039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.404 [2024-07-11 02:46:16.778068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:26.404 [2024-07-11 02:46:16.783334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.404 [2024-07-11 02:46:16.783385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.404 [2024-07-11 02:46:16.783422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:26.404 [2024-07-11 02:46:16.789899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.404 [2024-07-11 02:46:16.789936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.404 [2024-07-11 02:46:16.789965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:26.404 [2024-07-11 02:46:16.795858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0) 00:40:26.404 [2024-07-11 02:46:16.795893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.404 [2024-07-11 02:46:16.795922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:26.404 [2024-07-11 02:46:16.803253] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.404 [2024-07-11 02:46:16.803290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.404 [2024-07-11 02:46:16.803319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.404 [2024-07-11 02:46:16.810847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.404 [2024-07-11 02:46:16.810883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.404 [2024-07-11 02:46:16.810912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.404 [2024-07-11 02:46:16.815110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.404 [2024-07-11 02:46:16.815145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.404 [2024-07-11 02:46:16.815173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.663 [2024-07-11 02:46:16.823498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.663 [2024-07-11 02:46:16.823559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.663 [2024-07-11 02:46:16.823590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.663 [2024-07-11 02:46:16.831496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.663 [2024-07-11 02:46:16.831545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.663 [2024-07-11 02:46:16.831575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.664 [2024-07-11 02:46:16.839387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.664 [2024-07-11 02:46:16.839424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.664 [2024-07-11 02:46:16.839453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.664 [2024-07-11 02:46:16.845649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.664 [2024-07-11 02:46:16.845685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.664 [2024-07-11 02:46:16.845714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.664 [2024-07-11 02:46:16.851435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.664 [2024-07-11 02:46:16.851479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.664 [2024-07-11 02:46:16.851527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.664 [2024-07-11 02:46:16.857635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.664 [2024-07-11 02:46:16.857671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.664 [2024-07-11 02:46:16.857700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.664 [2024-07-11 02:46:16.863846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.664 [2024-07-11 02:46:16.863887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.664 [2024-07-11 02:46:16.863916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.664 [2024-07-11 02:46:16.870893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.664 [2024-07-11 02:46:16.870936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.664 [2024-07-11 02:46:16.870965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.664 [2024-07-11 02:46:16.879204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.664 [2024-07-11 02:46:16.879241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.664 [2024-07-11 02:46:16.879271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.664 [2024-07-11 02:46:16.886310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.664 [2024-07-11 02:46:16.886348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.664 [2024-07-11 02:46:16.886376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.664 [2024-07-11 02:46:16.890896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.664 [2024-07-11 02:46:16.890940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.664 [2024-07-11 02:46:16.890969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.664 [2024-07-11 02:46:16.898747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.664 [2024-07-11 02:46:16.898784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.664 [2024-07-11 02:46:16.898814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.664 [2024-07-11 02:46:16.906553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.664 [2024-07-11 02:46:16.906588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.664 [2024-07-11 02:46:16.906617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.664 [2024-07-11 02:46:16.915110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.664 [2024-07-11 02:46:16.915154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.664 [2024-07-11 02:46:16.915184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.664 [2024-07-11 02:46:16.922456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.664 [2024-07-11 02:46:16.922492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.664 [2024-07-11 02:46:16.922529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.664 [2024-07-11 02:46:16.930750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.664 [2024-07-11 02:46:16.930789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.664 [2024-07-11 02:46:16.930818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.664 [2024-07-11 02:46:16.938621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.664 [2024-07-11 02:46:16.938664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.664 [2024-07-11 02:46:16.938692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.664 [2024-07-11 02:46:16.946006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.664 [2024-07-11 02:46:16.946043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.664 [2024-07-11 02:46:16.946072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.664 [2024-07-11 02:46:16.953646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0afb0)
00:40:26.664 [2024-07-11 02:46:16.953682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.664 [2024-07-11 02:46:16.953713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.664
00:40:26.664 Latency(us)
00:40:26.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:26.664 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:40:26.664 nvme0n1 : 2.00 5384.04 673.00 0.00 0.00 2966.76 813.13 9514.86
00:40:26.664 ===================================================================================================================
00:40:26.664 Total : 5384.04 673.00 0.00 0.00 2966.76 813.13 9514.86
00:40:26.664 0
00:40:26.664 02:46:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:40:26.664 02:46:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:40:26.664 02:46:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:40:26.664 02:46:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:40:26.664 | .driver_specific
00:40:26.664 | .nvme_error
00:40:26.664 | .status_code
00:40:26.664 | .command_transient_transport_error'
00:40:26.923 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 347 > 0 ))
00:40:26.923 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1975263
00:40:26.923 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1975263 ']'
00:40:26.923 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1975263
00:40:26.923 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:40:26.923 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:40:26.923 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1975263
00:40:26.923 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:40:26.923 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:40:26.923 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1975263'
killing process with pid 1975263
02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1975263
Received shutdown signal, test time was about 2.000000 seconds
00:40:26.923
00:40:26.923 Latency(us)
00:40:26.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:26.923 ===================================================================================================================
00:40:26.923 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:40:26.923 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1975263
00:40:27.181 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:40:27.181 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:40:27.181 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:40:27.181 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:40:27.181 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:40:27.181 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1975568
00:40:27.181 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1975568 /var/tmp/bperf.sock
00:40:27.181 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:40:27.181 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1975568 ']'
00:40:27.181 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:40:27.181 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:40:27.181 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:40:27.181 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:40:27.181 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:40:27.181 [2024-07-11 02:46:17.506229] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:40:27.181 [2024-07-11 02:46:17.506320] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975568 ]
00:40:27.181 EAL: No free 2048 kB hugepages reported on node 1
00:40:27.181 [2024-07-11 02:46:17.565144] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:27.439 [2024-07-11 02:46:17.652737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:40:27.439 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:40:27.439 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:40:27.439 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:40:27.439 02:46:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:40:27.698 02:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:40:27.698 02:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:40:27.698 02:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:40:27.698 02:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:40:27.698 02:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:40:27.698 02:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:40:28.264 nvme0n1
02:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:40:28.264 02:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:40:28.264 02:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:40:28.264 02:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:40:28.264 02:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:40:28.264 02:46:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:40:28.264 Running I/O for 2 seconds...
00:40:28.264 [2024-07-11 02:46:18.674534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190df988
00:40:28.264 [2024-07-11 02:46:18.676177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.264 [2024-07-11 02:46:18.676218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:40:28.523 [2024-07-11 02:46:18.688863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190eea00
00:40:28.523 [2024-07-11 02:46:18.689943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.523 [2024-07-11 02:46:18.689977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:40:28.523 [2024-07-11 02:46:18.703267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f9f68
00:40:28.523 [2024-07-11 02:46:18.704746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.523 [2024-07-11 02:46:18.704780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:40:28.523 [2024-07-11 02:46:18.718779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190efae0
00:40:28.523 [2024-07-11 02:46:18.720950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.523 [2024-07-11 02:46:18.720982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:40:28.523 [2024-07-11 02:46:18.728771] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190dece0
00:40:28.523 [2024-07-11 02:46:18.729648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.523 [2024-07-11 02:46:18.729681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:40:28.523 [2024-07-11 02:46:18.744696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f6890
00:40:28.523 [2024-07-11 02:46:18.746275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.523 [2024-07-11 02:46:18.746308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:40:28.523 [2024-07-11 02:46:18.759330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190de470
00:40:28.523 [2024-07-11 02:46:18.761108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.523 [2024-07-11 02:46:18.761141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:40:28.523 [2024-07-11 02:46:18.774021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e6300
00:40:28.523 [2024-07-11 02:46:18.775981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.523 [2024-07-11 02:46:18.776014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:40:28.523 [2024-07-11 02:46:18.788674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e1b48
00:40:28.523 [2024-07-11 02:46:18.790830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.523 [2024-07-11 02:46:18.790864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:40:28.523 [2024-07-11 02:46:18.798629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190efae0
00:40:28.523 [2024-07-11 02:46:18.799484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.523 [2024-07-11 02:46:18.799526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:40:28.523 [2024-07-11 02:46:18.814491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190ff3c8
00:40:28.523 [2024-07-11 02:46:18.816084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.523 [2024-07-11 02:46:18.816116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:28.523 [2024-07-11 02:46:18.829141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190ef270
00:40:28.523 [2024-07-11 02:46:18.830928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.523 [2024-07-11 02:46:18.830962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:40:28.523 [2024-07-11 02:46:18.843850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f7100
00:40:28.523 [2024-07-11 02:46:18.845803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.523 [2024-07-11 02:46:18.845836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:28.523 [2024-07-11 02:46:18.858507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e8d30
00:40:28.523 [2024-07-11 02:46:18.860677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.523 [2024-07-11 02:46:18.860711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:40:28.523 [2024-07-11 02:46:18.873171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e12d8
00:40:28.523 [2024-07-11 02:46:18.875513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.523 [2024-07-11 02:46:18.875546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:28.523 [2024-07-11 02:46:18.883184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f81e0
00:40:28.523 [2024-07-11 02:46:18.884220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.523 [2024-07-11 02:46:18.884252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:40:28.523 [2024-07-11 02:46:18.897391] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f35f0
00:40:28.523 [2024-07-11 02:46:18.898674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.523 [2024-07-11 02:46:18.898707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:40:28.523 [2024-07-11 02:46:18.912084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190fdeb0
00:40:28.523 [2024-07-11 02:46:18.913538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.523 [2024-07-11 02:46:18.913570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:40:28.523 [2024-07-11 02:46:18.926722] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f5be8
00:40:28.523 [2024-07-11 02:46:18.928372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.523 [2024-07-11 02:46:18.928405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:40:28.523 [2024-07-11 02:46:18.941378] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190ed920
00:40:28.523 [2024-07-11 02:46:18.943229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.523 [2024-07-11 02:46:18.943261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:40:28.781 [2024-07-11 02:46:18.955536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f5378
00:40:28.781 [2024-07-11 02:46:18.957104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.781 [2024-07-11 02:46:18.957136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:40:28.781 [2024-07-11 02:46:18.968605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190ff3c8
00:40:28.781 [2024-07-11 02:46:18.970086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.781 [2024-07-11 02:46:18.970124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:40:28.781 [2024-07-11 02:46:18.983055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f20d8
00:40:28.781 [2024-07-11 02:46:18.984538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.781 [2024-07-11 02:46:18.984571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:40:28.781 [2024-07-11 02:46:18.996800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f5378
00:40:28.781 [2024-07-11 02:46:18.998461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.781 [2024-07-11 02:46:18.998494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:40:28.781 [2024-07-11 02:46:19.011453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e6300
00:40:28.781 [2024-07-11 02:46:19.013301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.781 [2024-07-11 02:46:19.013334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:40:28.781 [2024-07-11 02:46:19.026123] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e12d8
00:40:28.781 [2024-07-11 02:46:19.028171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.781 [2024-07-11 02:46:19.028203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:40:28.781 [2024-07-11 02:46:19.040818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190efae0
00:40:28.781 [2024-07-11 02:46:19.043048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.781 [2024-07-11 02:46:19.043080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:40:28.781 [2024-07-11 02:46:19.050767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f20d8
00:40:28.781 [2024-07-11 02:46:19.051681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.781 [2024-07-11 02:46:19.051713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:40:28.781 [2024-07-11 02:46:19.066577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e88f8
00:40:28.781 [2024-07-11 02:46:19.068224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.781 [2024-07-11 02:46:19.068257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:40:28.781 [2024-07-11 02:46:19.081214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e49b0
00:40:28.781 [2024-07-11 02:46:19.083042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.781 [2024-07-11 02:46:19.083075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:40:28.781 [2024-07-11 02:46:19.095880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190fc998
00:40:28.781 [2024-07-11 02:46:19.097914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.781 [2024-07-11 02:46:19.097946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:40:28.781 [2024-07-11 02:46:19.110500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e6fa8
00:40:28.781 [2024-07-11 02:46:19.112744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.781 [2024-07-11 02:46:19.112776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:40:28.781 [2024-07-11 02:46:19.120480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f9b30
00:40:28.781 [2024-07-11 02:46:19.121395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.781 [2024-07-11 02:46:19.121427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:40:28.781 [2024-07-11 02:46:19.136301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e23b8
00:40:28.781 [2024-07-11 02:46:19.137934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.781 [2024-07-11 02:46:19.137967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:40:28.781 [2024-07-11 02:46:19.150924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e1b48
00:40:28.781 [2024-07-11 02:46:19.152879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.781 [2024-07-11 02:46:19.152912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:40:28.781 [2024-07-11 02:46:19.165705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190ee5c8
00:40:28.781 [2024-07-11 02:46:19.167723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.781 [2024-07-11 02:46:19.167755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:40:28.781 [2024-07-11 02:46:19.180328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190fbcf0
00:40:28.781 [2024-07-11 02:46:19.182537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.781 [2024-07-11 02:46:19.182570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:40:28.781 [2024-07-11 02:46:19.190623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190de8a8
00:40:28.781 [2024-07-11 02:46:19.191711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:28.781 [2024-07-11 02:46:19.191744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:40:29.040 [2024-07-11 02:46:19.206355] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f5378
00:40:29.040 [2024-07-11 02:46:19.207578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:29.040 [2024-07-11 02:46:19.207610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:40:29.040 [2024-07-11 02:46:19.220353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f6458
00:40:29.040 [2024-07-11 02:46:19.221740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:29.040 [2024-07-11 02:46:19.221772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:40:29.040 [2024-07-11 02:46:19.235835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190fc998
00:40:29.040 [2024-07-11 02:46:19.237896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:29.040 [2024-07-11 02:46:19.237928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:40:29.040 [2024-07-11 02:46:19.250445] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190efae0
00:40:29.040 [2024-07-11 02:46:19.252709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:29.040 [2024-07-11 02:46:19.252741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:40:29.040 [2024-07-11 02:46:19.260374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e4de8
00:40:29.040 [2024-07-11 02:46:19.261325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:29.040 [2024-07-11 02:46:19.261357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0
sqhd:003a p:0 m:0 dnr:0 00:40:29.040 [2024-07-11 02:46:19.276216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190eaab8 00:40:29.040 [2024-07-11 02:46:19.277916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.040 [2024-07-11 02:46:19.277947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:40:29.040 [2024-07-11 02:46:19.290852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190fbcf0 00:40:29.040 [2024-07-11 02:46:19.292716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.040 [2024-07-11 02:46:19.292749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:40:29.040 [2024-07-11 02:46:19.305561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190fef90 00:40:29.040 [2024-07-11 02:46:19.307640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.040 [2024-07-11 02:46:19.307672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:40:29.040 [2024-07-11 02:46:19.320205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f0350 00:40:29.040 [2024-07-11 02:46:19.322448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.040 [2024-07-11 02:46:19.322480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:40:29.040 [2024-07-11 02:46:19.330137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f1868 00:40:29.040 [2024-07-11 02:46:19.331070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.040 [2024-07-11 02:46:19.331107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:40:29.040 [2024-07-11 02:46:19.344266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f0788 00:40:29.040 [2024-07-11 02:46:19.345195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.041 [2024-07-11 02:46:19.345228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:40:29.041 [2024-07-11 02:46:19.358754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190fef90 00:40:29.041 [2024-07-11 02:46:19.359870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.041 [2024-07-11 02:46:19.359902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:40:29.041 [2024-07-11 02:46:19.373535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f81e0 00:40:29.041 [2024-07-11 02:46:19.374852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.041 [2024-07-11 02:46:19.374885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:40:29.041 [2024-07-11 02:46:19.387698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f2510 00:40:29.041 [2024-07-11 02:46:19.388993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.041 [2024-07-11 02:46:19.389026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:40:29.041 [2024-07-11 02:46:19.403797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e5220 00:40:29.041 [2024-07-11 02:46:19.405837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.041 [2024-07-11 02:46:19.405870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:40:29.041 [2024-07-11 02:46:19.415610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f7970 00:40:29.041 [2024-07-11 02:46:19.416681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.041 [2024-07-11 02:46:19.416714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:40:29.041 [2024-07-11 02:46:19.429804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e49b0 00:40:29.041 [2024-07-11 02:46:19.431121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.041 
[2024-07-11 02:46:19.431154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:40:29.041 [2024-07-11 02:46:19.443710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e38d0 00:40:29.041 [2024-07-11 02:46:19.445014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.041 [2024-07-11 02:46:19.445047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:40:29.041 [2024-07-11 02:46:19.456564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e12d8 00:40:29.041 [2024-07-11 02:46:19.457853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.041 [2024-07-11 02:46:19.457886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:29.299 [2024-07-11 02:46:19.471354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f31b8 00:40:29.299 [2024-07-11 02:46:19.472824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.299 [2024-07-11 02:46:19.472857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:40:29.299 [2024-07-11 02:46:19.486132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e0a68 00:40:29.299 [2024-07-11 02:46:19.487758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15490 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.299 [2024-07-11 02:46:19.487791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:40:29.299 [2024-07-11 02:46:19.500846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e1f80 00:40:29.299 [2024-07-11 02:46:19.502694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.299 [2024-07-11 02:46:19.502727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:40:29.299 [2024-07-11 02:46:19.514969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190fb480 00:40:29.299 [2024-07-11 02:46:19.516771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.299 [2024-07-11 02:46:19.516804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:40:29.299 [2024-07-11 02:46:19.529751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190de038 00:40:29.299 [2024-07-11 02:46:19.531759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.299 [2024-07-11 02:46:19.531791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:40:29.300 [2024-07-11 02:46:19.544529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190dece0 00:40:29.300 [2024-07-11 02:46:19.546717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:76 nsid:1 lba:1459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.300 [2024-07-11 02:46:19.546750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:40:29.300 [2024-07-11 02:46:19.554519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190ed0b0 00:40:29.300 [2024-07-11 02:46:19.555384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.300 [2024-07-11 02:46:19.555416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:40:29.300 [2024-07-11 02:46:19.568681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190df118 00:40:29.300 [2024-07-11 02:46:19.569542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.300 [2024-07-11 02:46:19.569575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:40:29.300 [2024-07-11 02:46:19.581711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190ec840 00:40:29.300 [2024-07-11 02:46:19.582546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.300 [2024-07-11 02:46:19.582586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:40:29.300 [2024-07-11 02:46:19.596556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e9168 00:40:29.300 [2024-07-11 02:46:19.597606] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.300 [2024-07-11 02:46:19.597647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:40:29.300 [2024-07-11 02:46:19.611301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190fac10 00:40:29.300 [2024-07-11 02:46:19.612552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.300 [2024-07-11 02:46:19.612590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:29.300 [2024-07-11 02:46:19.626088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f8618 00:40:29.300 [2024-07-11 02:46:19.627500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.300 [2024-07-11 02:46:19.627540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:40:29.300 [2024-07-11 02:46:19.640931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190ea248 00:40:29.300 [2024-07-11 02:46:19.642531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.300 [2024-07-11 02:46:19.642564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:40:29.300 [2024-07-11 02:46:19.655706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e8088 
00:40:29.300 [2024-07-11 02:46:19.657508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.300 [2024-07-11 02:46:19.657546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:40:29.300 [2024-07-11 02:46:19.669631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190eb760 00:40:29.300 [2024-07-11 02:46:19.671106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.300 [2024-07-11 02:46:19.671138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:40:29.300 [2024-07-11 02:46:19.682526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190df988 00:40:29.300 [2024-07-11 02:46:19.683940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.300 [2024-07-11 02:46:19.683974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:40:29.300 [2024-07-11 02:46:19.697230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190de038 00:40:29.300 [2024-07-11 02:46:19.698839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.300 [2024-07-11 02:46:19.698879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:40:29.300 [2024-07-11 02:46:19.711995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x23f41c0) with pdu=0x2000190e4de8 00:40:29.300 [2024-07-11 02:46:19.713800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.300 [2024-07-11 02:46:19.713833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:40:29.558 [2024-07-11 02:46:19.726737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e9168 00:40:29.558 [2024-07-11 02:46:19.728754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.558 [2024-07-11 02:46:19.728787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:40:29.558 [2024-07-11 02:46:19.741503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e73e0 00:40:29.558 [2024-07-11 02:46:19.743695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.558 [2024-07-11 02:46:19.743728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:40:29.558 [2024-07-11 02:46:19.751508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f8618 00:40:29.558 [2024-07-11 02:46:19.752379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.558 [2024-07-11 02:46:19.752411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:40:29.558 [2024-07-11 02:46:19.765690] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190eb760 00:40:29.558 [2024-07-11 02:46:19.766547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.558 [2024-07-11 02:46:19.766580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:40:29.558 [2024-07-11 02:46:19.781362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190eb328 00:40:29.558 [2024-07-11 02:46:19.783410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.558 [2024-07-11 02:46:19.783444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:29.558 [2024-07-11 02:46:19.793442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f92c0 00:40:29.558 [2024-07-11 02:46:19.794466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.558 [2024-07-11 02:46:19.794498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:40:29.558 [2024-07-11 02:46:19.808141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e5658 00:40:29.558 [2024-07-11 02:46:19.809361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.558 [2024-07-11 02:46:19.809393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:40:29.558 [2024-07-11 02:46:19.822870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190fda78 00:40:29.558 [2024-07-11 02:46:19.824281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.558 [2024-07-11 02:46:19.824314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:40:29.558 [2024-07-11 02:46:19.837614] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190fe2e8 00:40:29.558 [2024-07-11 02:46:19.839198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.558 [2024-07-11 02:46:19.839230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:40:29.558 [2024-07-11 02:46:19.852347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190eb760 00:40:29.558 [2024-07-11 02:46:19.854127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.558 [2024-07-11 02:46:19.854159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:40:29.559 [2024-07-11 02:46:19.867044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e6fa8 00:40:29.559 [2024-07-11 02:46:19.869018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.559 [2024-07-11 02:46:19.869050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:40:29.559 [2024-07-11 02:46:19.881769] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f57b0 00:40:29.559 [2024-07-11 02:46:19.883925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.559 [2024-07-11 02:46:19.883957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:40:29.559 [2024-07-11 02:46:19.891810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f8e88 00:40:29.559 [2024-07-11 02:46:19.892659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.559 [2024-07-11 02:46:19.892692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:40:29.559 [2024-07-11 02:46:19.910565] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190ddc00 00:40:29.559 [2024-07-11 02:46:19.912957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.559 [2024-07-11 02:46:19.912990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:29.559 [2024-07-11 02:46:19.920807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190fa3a0 00:40:29.559 [2024-07-11 02:46:19.922026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.559 [2024-07-11 02:46:19.922059] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:29.559 [2024-07-11 02:46:19.935562] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f9b30 00:40:29.559 [2024-07-11 02:46:19.936969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.559 [2024-07-11 02:46:19.937002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:40:29.559 [2024-07-11 02:46:19.950320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190eb328 00:40:29.559 [2024-07-11 02:46:19.951926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.559 [2024-07-11 02:46:19.951959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:40:29.559 [2024-07-11 02:46:19.967312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190eb328 00:40:29.559 [2024-07-11 02:46:19.969677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.559 [2024-07-11 02:46:19.969710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:29.559 [2024-07-11 02:46:19.977364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190ea248 00:40:29.559 [2024-07-11 02:46:19.978388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.559 [2024-07-11 02:46:19.978421] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:40:29.817 [2024-07-11 02:46:19.990732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e7818 00:40:29.817 [2024-07-11 02:46:19.991758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.817 [2024-07-11 02:46:19.991790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:40:29.817 [2024-07-11 02:46:20.005602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190fc998 00:40:29.817 [2024-07-11 02:46:20.006831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.817 [2024-07-11 02:46:20.006865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:29.817 [2024-07-11 02:46:20.020474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190feb58 00:40:29.817 [2024-07-11 02:46:20.021936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.817 [2024-07-11 02:46:20.021972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:40:29.817 [2024-07-11 02:46:20.035406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190fac10 00:40:29.817 [2024-07-11 02:46:20.037047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:40:29.817 [2024-07-11 02:46:20.037085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:29.817 [2024-07-11 02:46:20.050347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f8618 00:40:29.818 [2024-07-11 02:46:20.052148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.818 [2024-07-11 02:46:20.052183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:40:29.818 [2024-07-11 02:46:20.065185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190eaef0 00:40:29.818 [2024-07-11 02:46:20.067159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.818 [2024-07-11 02:46:20.067201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:29.818 [2024-07-11 02:46:20.079951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f6890 00:40:29.818 [2024-07-11 02:46:20.082100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.818 [2024-07-11 02:46:20.082134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:40:29.818 [2024-07-11 02:46:20.094690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f7da8 00:40:29.818 [2024-07-11 02:46:20.097034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 
lba:10690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.818 [2024-07-11 02:46:20.097067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:29.818 [2024-07-11 02:46:20.104709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e8d30 00:40:29.818 [2024-07-11 02:46:20.105746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.818 [2024-07-11 02:46:20.105779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:40:29.818 [2024-07-11 02:46:20.118884] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190fd640 00:40:29.818 [2024-07-11 02:46:20.119896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.818 [2024-07-11 02:46:20.119929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:40:29.818 [2024-07-11 02:46:20.133110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e49b0 00:40:29.818 [2024-07-11 02:46:20.134145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.818 [2024-07-11 02:46:20.134178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:40:29.818 [2024-07-11 02:46:20.145945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f9b30 00:40:29.818 [2024-07-11 02:46:20.146928] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.818 [2024-07-11 02:46:20.146961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:40:29.818 [2024-07-11 02:46:20.160665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f6890 00:40:29.818 [2024-07-11 02:46:20.161846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.818 [2024-07-11 02:46:20.161878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:40:29.818 [2024-07-11 02:46:20.175411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f2948 00:40:29.818 [2024-07-11 02:46:20.176863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.818 [2024-07-11 02:46:20.176896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:29.818 [2024-07-11 02:46:20.190228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190fd640 00:40:29.818 [2024-07-11 02:46:20.191793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.818 [2024-07-11 02:46:20.191826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:40:29.818 [2024-07-11 02:46:20.204971] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190ec408 00:40:29.818 
[2024-07-11 02:46:20.206718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.818 [2024-07-11 02:46:20.206751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:40:29.818 [2024-07-11 02:46:20.219694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f6890 00:40:29.818 [2024-07-11 02:46:20.221634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.818 [2024-07-11 02:46:20.221666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:40:29.818 [2024-07-11 02:46:20.234426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190fd208 00:40:29.818 [2024-07-11 02:46:20.236551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.818 [2024-07-11 02:46:20.236583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:40:30.076 [2024-07-11 02:46:20.249141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e8d30 00:40:30.076 [2024-07-11 02:46:20.251458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.076 [2024-07-11 02:46:20.251490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:30.076 [2024-07-11 02:46:20.259139] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x23f41c0) with pdu=0x2000190fbcf0 00:40:30.076 [2024-07-11 02:46:20.260133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.076 [2024-07-11 02:46:20.260165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:40:30.076 [2024-07-11 02:46:20.272749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e73e0 00:40:30.076 [2024-07-11 02:46:20.273766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.076 [2024-07-11 02:46:20.273798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:30.076 [2024-07-11 02:46:20.287258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e5ec8 00:40:30.076 [2024-07-11 02:46:20.288252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.076 [2024-07-11 02:46:20.288285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:40:30.076 [2024-07-11 02:46:20.301332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190eff18 00:40:30.076 [2024-07-11 02:46:20.302310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.076 [2024-07-11 02:46:20.302343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:40:30.076 [2024-07-11 02:46:20.314331] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190fcdd0 00:40:30.076 [2024-07-11 02:46:20.315303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.076 [2024-07-11 02:46:20.315335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:40:30.076 [2024-07-11 02:46:20.329083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190ebb98 00:40:30.076 [2024-07-11 02:46:20.330242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.076 [2024-07-11 02:46:20.330274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:40:30.076 [2024-07-11 02:46:20.343976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f96f8 00:40:30.076 [2024-07-11 02:46:20.345328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.076 [2024-07-11 02:46:20.345360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:30.076 [2024-07-11 02:46:20.358720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e84c0 00:40:30.076 [2024-07-11 02:46:20.360254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.076 [2024-07-11 02:46:20.360286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 
dnr:0 00:40:30.076 [2024-07-11 02:46:20.373432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f3a28 00:40:30.076 [2024-07-11 02:46:20.375154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.076 [2024-07-11 02:46:20.375186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:40:30.076 [2024-07-11 02:46:20.388114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190ebb98 00:40:30.076 [2024-07-11 02:46:20.390042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.076 [2024-07-11 02:46:20.390074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:40:30.076 [2024-07-11 02:46:20.402764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190fd208 00:40:30.077 [2024-07-11 02:46:20.404857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.077 [2024-07-11 02:46:20.404889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:40:30.077 [2024-07-11 02:46:20.417425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e2c28 00:40:30.077 [2024-07-11 02:46:20.419715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.077 [2024-07-11 02:46:20.419748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:40:30.077 [2024-07-11 02:46:20.427395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190e1b48 00:40:30.077 [2024-07-11 02:46:20.428377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.077 [2024-07-11 02:46:20.428415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:40:30.077 [2024-07-11 02:46:20.443064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f96f8 00:40:30.077 [2024-07-11 02:46:20.443319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.077 [2024-07-11 02:46:20.443351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:30.077 [2024-07-11 02:46:20.458424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f96f8 00:40:30.077 [2024-07-11 02:46:20.458689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.077 [2024-07-11 02:46:20.458722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:30.077 [2024-07-11 02:46:20.473780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f96f8 00:40:30.077 [2024-07-11 02:46:20.474040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.077 [2024-07-11 02:46:20.474072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:30.077 [2024-07-11 02:46:20.489111] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f96f8 00:40:30.077 [2024-07-11 02:46:20.489367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.077 [2024-07-11 02:46:20.489398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:30.335 [2024-07-11 02:46:20.504439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f96f8 00:40:30.335 [2024-07-11 02:46:20.504705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.335 [2024-07-11 02:46:20.504738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:30.335 [2024-07-11 02:46:20.519824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f96f8 00:40:30.336 [2024-07-11 02:46:20.520079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.336 [2024-07-11 02:46:20.520111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:30.336 [2024-07-11 02:46:20.535134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f96f8 00:40:30.336 [2024-07-11 02:46:20.535387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:40:30.336 [2024-07-11 02:46:20.535420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:30.336 [2024-07-11 02:46:20.550518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f96f8 00:40:30.336 [2024-07-11 02:46:20.550776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.336 [2024-07-11 02:46:20.550808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:30.336 [2024-07-11 02:46:20.565889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f96f8 00:40:30.336 [2024-07-11 02:46:20.566149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.336 [2024-07-11 02:46:20.566181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:30.336 [2024-07-11 02:46:20.581203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f96f8 00:40:30.336 [2024-07-11 02:46:20.581457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.336 [2024-07-11 02:46:20.581488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:30.336 [2024-07-11 02:46:20.596587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f96f8 00:40:30.336 [2024-07-11 02:46:20.596839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 
lba:14955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.336 [2024-07-11 02:46:20.596871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:30.336 [2024-07-11 02:46:20.611914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f96f8 00:40:30.336 [2024-07-11 02:46:20.612174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.336 [2024-07-11 02:46:20.612206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:30.336 [2024-07-11 02:46:20.627262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f96f8 00:40:30.336 [2024-07-11 02:46:20.627523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.336 [2024-07-11 02:46:20.627555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:30.336 [2024-07-11 02:46:20.642580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f96f8 00:40:30.336 [2024-07-11 02:46:20.642835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.336 [2024-07-11 02:46:20.642866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:30.336 [2024-07-11 02:46:20.657861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23f41c0) with pdu=0x2000190f96f8 00:40:30.336 [2024-07-11 02:46:20.658116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.336 [2024-07-11 02:46:20.658148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:30.336 00:40:30.336 Latency(us) 00:40:30.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:30.336 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:30.336 nvme0n1 : 2.01 18010.37 70.35 0.00 0.00 7088.83 3034.07 18544.26 00:40:30.336 =================================================================================================================== 00:40:30.336 Total : 18010.37 70.35 0.00 0.00 7088.83 3034.07 18544.26 00:40:30.336 0 00:40:30.336 02:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:40:30.336 02:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:40:30.336 02:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:40:30.336 02:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:40:30.336 | .driver_specific 00:40:30.336 | .nvme_error 00:40:30.336 | .status_code 00:40:30.336 | .command_transient_transport_error' 00:40:30.595 02:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 141 > 0 )) 00:40:30.595 02:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1975568 00:40:30.595 02:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1975568 ']' 00:40:30.595 02:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1975568 00:40:30.595 02:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@953 -- # uname 00:40:30.595 02:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:30.595 02:46:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1975568 00:40:30.595 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:30.595 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:30.595 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1975568' 00:40:30.595 killing process with pid 1975568 00:40:30.595 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1975568 00:40:30.595 Received shutdown signal, test time was about 2.000000 seconds 00:40:30.595 00:40:30.595 Latency(us) 00:40:30.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:30.595 =================================================================================================================== 00:40:30.595 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:30.595 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1975568 00:40:30.854 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:40:30.854 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:40:30.854 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:40:30.854 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:40:30.854 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:40:30.854 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1975966 00:40:30.854 02:46:21 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:40:30.854 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1975966 /var/tmp/bperf.sock 00:40:30.854 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1975966 ']' 00:40:30.854 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:30.854 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:30.854 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:30.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:30.854 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:30.854 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:30.854 [2024-07-11 02:46:21.220207] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:40:30.854 [2024-07-11 02:46:21.220304] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975966 ] 00:40:30.854 I/O size of 131072 is greater than zero copy threshold (65536). 00:40:30.854 Zero copy mechanism will not be used. 
00:40:30.854 EAL: No free 2048 kB hugepages reported on node 1 00:40:31.112 [2024-07-11 02:46:21.280828] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:31.112 [2024-07-11 02:46:21.371153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:31.112 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:31.112 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:40:31.112 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:40:31.112 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:40:31.370 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:40:31.370 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:31.370 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:31.370 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:31.370 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:31.370 02:46:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:31.936 nvme0n1 00:40:31.936 02:46:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o 
crc32c -t corrupt -i 32 00:40:31.936 02:46:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:31.936 02:46:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:31.936 02:46:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:31.936 02:46:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:40:31.936 02:46:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:32.196 I/O size of 131072 is greater than zero copy threshold (65536). 00:40:32.196 Zero copy mechanism will not be used. 00:40:32.196 Running I/O for 2 seconds... 00:40:32.196 [2024-07-11 02:46:22.378383] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.196 [2024-07-11 02:46:22.378819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.196 [2024-07-11 02:46:22.378869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.196 [2024-07-11 02:46:22.384030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.196 [2024-07-11 02:46:22.384357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.196 [2024-07-11 02:46:22.384393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.196 [2024-07-11 02:46:22.389475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with 
pdu=0x2000190fef90 00:40:32.196 [2024-07-11 02:46:22.389822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.196 [2024-07-11 02:46:22.389855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.196 [2024-07-11 02:46:22.394828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.196 [2024-07-11 02:46:22.395152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.196 [2024-07-11 02:46:22.395187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.196 [2024-07-11 02:46:22.400087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.196 [2024-07-11 02:46:22.400414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.196 [2024-07-11 02:46:22.400447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.196 [2024-07-11 02:46:22.405989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.196 [2024-07-11 02:46:22.406313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.196 [2024-07-11 02:46:22.406346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.196 [2024-07-11 02:46:22.412062] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.196 [2024-07-11 02:46:22.412387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.196 [2024-07-11 02:46:22.412421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.196 [2024-07-11 02:46:22.418339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.196 [2024-07-11 02:46:22.418672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.196 [2024-07-11 02:46:22.418705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.196 [2024-07-11 02:46:22.424500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.196 [2024-07-11 02:46:22.424835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.196 [2024-07-11 02:46:22.424868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.196 [2024-07-11 02:46:22.429534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.196 [2024-07-11 02:46:22.429845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.196 [2024-07-11 02:46:22.429878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.196 
[2024-07-11 02:46:22.434467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.196 [2024-07-11 02:46:22.434774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.196 [2024-07-11 02:46:22.434808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.196 [2024-07-11 02:46:22.439266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.196 [2024-07-11 02:46:22.439562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.196 [2024-07-11 02:46:22.439602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.196 [2024-07-11 02:46:22.443967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.196 [2024-07-11 02:46:22.444231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.196 [2024-07-11 02:46:22.444264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.196 [2024-07-11 02:46:22.448683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.196 [2024-07-11 02:46:22.448951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.196 [2024-07-11 02:46:22.448990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.196 [2024-07-11 02:46:22.453341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.196 [2024-07-11 02:46:22.453611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.196 [2024-07-11 02:46:22.453648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.196 [2024-07-11 02:46:22.458022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.458283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.458320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.462561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.462827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.462863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.467706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.467963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.468001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.472779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.473037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.473071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.478083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.478349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.478382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.483205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.483469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.483502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.488246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.488507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:32.197 [2024-07-11 02:46:22.488549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.493494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.493761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.493799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.498582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.498851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.498884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.503670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.503925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.503958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.508920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.509184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.509217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.513928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.514190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.514223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.519074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.519330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.519363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.524690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.524952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.524990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.529731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.529991] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.530029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.534974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.535238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.535271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.540179] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.540443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.540476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.545282] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.545551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.545584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.550425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 
00:40:32.197 [2024-07-11 02:46:22.550698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.550730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.555610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.555871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.555904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.560937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.561195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.561229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.566234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.566506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.566548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.571290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.571567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.571601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.576532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.576796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.576829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.581591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.581849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.581899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.586751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.587014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.587057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 
02:46:22.591701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.591958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.591997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.596739] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.597011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.597045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.601915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.197 [2024-07-11 02:46:22.602179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.197 [2024-07-11 02:46:22.602216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.197 [2024-07-11 02:46:22.607138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.198 [2024-07-11 02:46:22.607400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.198 [2024-07-11 02:46:22.607438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.198 [2024-07-11 02:46:22.612324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.198 [2024-07-11 02:46:22.612597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.198 [2024-07-11 02:46:22.612634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.458 [2024-07-11 02:46:22.617592] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.458 [2024-07-11 02:46:22.617851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.458 [2024-07-11 02:46:22.617885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.458 [2024-07-11 02:46:22.622831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.458 [2024-07-11 02:46:22.623097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.458 [2024-07-11 02:46:22.623130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.458 [2024-07-11 02:46:22.628055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.458 [2024-07-11 02:46:22.628322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.458 [2024-07-11 02:46:22.628356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.458 [2024-07-11 02:46:22.633233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.458 [2024-07-11 02:46:22.633498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.458 [2024-07-11 02:46:22.633540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.458 [2024-07-11 02:46:22.638640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.458 [2024-07-11 02:46:22.638912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.458 [2024-07-11 02:46:22.638950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.458 [2024-07-11 02:46:22.643987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.458 [2024-07-11 02:46:22.644252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.458 [2024-07-11 02:46:22.644285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.458 [2024-07-11 02:46:22.649254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.458 [2024-07-11 02:46:22.649525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.458 [2024-07-11 02:46:22.649558] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.458 [2024-07-11 02:46:22.654435] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.458 [2024-07-11 02:46:22.654697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.458 [2024-07-11 02:46:22.654731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.458 [2024-07-11 02:46:22.659819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.458 [2024-07-11 02:46:22.660073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.458 [2024-07-11 02:46:22.660117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.458 [2024-07-11 02:46:22.665147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.458 [2024-07-11 02:46:22.665412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.458 [2024-07-11 02:46:22.665445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.458 [2024-07-11 02:46:22.670170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.458 [2024-07-11 02:46:22.670434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:40:32.458 [2024-07-11 02:46:22.670467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.458 [2024-07-11 02:46:22.675325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.458 [2024-07-11 02:46:22.675594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.458 [2024-07-11 02:46:22.675627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.458 [2024-07-11 02:46:22.680588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.458 [2024-07-11 02:46:22.680852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.458 [2024-07-11 02:46:22.680885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.458 [2024-07-11 02:46:22.685844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.458 [2024-07-11 02:46:22.686108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.458 [2024-07-11 02:46:22.686148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.458 [2024-07-11 02:46:22.691002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.458 [2024-07-11 02:46:22.691266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.458 [2024-07-11 02:46:22.691300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.458 [2024-07-11 02:46:22.696410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.458 [2024-07-11 02:46:22.696685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.458 [2024-07-11 02:46:22.696719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.458 [2024-07-11 02:46:22.701597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.458 [2024-07-11 02:46:22.701860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.458 [2024-07-11 02:46:22.701892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.458 [2024-07-11 02:46:22.706668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.458 [2024-07-11 02:46:22.706949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.458 [2024-07-11 02:46:22.706983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.458 [2024-07-11 02:46:22.711847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.458 [2024-07-11 02:46:22.712113] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.458 [2024-07-11 02:46:22.712145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.458 [2024-07-11 02:46:22.717004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.458 [2024-07-11 02:46:22.717259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.458 [2024-07-11 02:46:22.717297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.458 [2024-07-11 02:46:22.722742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.458 [2024-07-11 02:46:22.723114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.458 [2024-07-11 02:46:22.723147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.458 [2024-07-11 02:46:22.728209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.458 [2024-07-11 02:46:22.728473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.458 [2024-07-11 02:46:22.728506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.458 [2024-07-11 02:46:22.733676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 
00:40:32.458 [2024-07-11 02:46:22.733932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.458 [2024-07-11 02:46:22.733969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:32.458 [2024-07-11 02:46:22.739024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.458 [2024-07-11 02:46:22.739283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.458 [2024-07-11 02:46:22.739316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:32.458 [2024-07-11 02:46:22.744492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.458 [2024-07-11 02:46:22.744764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.458 [2024-07-11 02:46:22.744800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:32.458 [2024-07-11 02:46:22.749875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.458 [2024-07-11 02:46:22.750216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.458 [2024-07-11 02:46:22.750249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:32.458 [2024-07-11 02:46:22.755437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.459 [2024-07-11 02:46:22.755815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.459 [2024-07-11 02:46:22.755852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:32.459 [2024-07-11 02:46:22.760895] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.459 [2024-07-11 02:46:22.761275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.459 [2024-07-11 02:46:22.761314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:32.459 [2024-07-11 02:46:22.766437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.459 [2024-07-11 02:46:22.766733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.459 [2024-07-11 02:46:22.766770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:32.459 [2024-07-11 02:46:22.771950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.459 [2024-07-11 02:46:22.772209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.459 [2024-07-11 02:46:22.772242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:32.459 [2024-07-11 02:46:22.777180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.459 [2024-07-11 02:46:22.777543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.459 [2024-07-11 02:46:22.777578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:32.459 [2024-07-11 02:46:22.782749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.459 [2024-07-11 02:46:22.783006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.459 [2024-07-11 02:46:22.783039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:32.459 [2024-07-11 02:46:22.788208] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.459 [2024-07-11 02:46:22.788474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.459 [2024-07-11 02:46:22.788507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:32.459 [2024-07-11 02:46:22.793808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.459 [2024-07-11 02:46:22.794180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.459 [2024-07-11 02:46:22.794216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:32.459 [2024-07-11 02:46:22.799218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.459 [2024-07-11 02:46:22.799523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.459 [2024-07-11 02:46:22.799569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:32.459 [2024-07-11 02:46:22.804760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.459 [2024-07-11 02:46:22.805103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.459 [2024-07-11 02:46:22.805136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:32.459 [2024-07-11 02:46:22.810217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.459 [2024-07-11 02:46:22.810567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.459 [2024-07-11 02:46:22.810600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:32.459 [2024-07-11 02:46:22.815673] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.459 [2024-07-11 02:46:22.816035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.459 [2024-07-11 02:46:22.816067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:32.459 [2024-07-11 02:46:22.821335] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.459 [2024-07-11 02:46:22.821626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.459 [2024-07-11 02:46:22.821660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:32.459 [2024-07-11 02:46:22.826974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.459 [2024-07-11 02:46:22.827329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.459 [2024-07-11 02:46:22.827361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:32.459 [2024-07-11 02:46:22.832611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.459 [2024-07-11 02:46:22.832914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.459 [2024-07-11 02:46:22.832946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:32.459 [2024-07-11 02:46:22.838178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.459 [2024-07-11 02:46:22.838508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.459 [2024-07-11 02:46:22.838546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:32.459 [2024-07-11 02:46:22.843824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.459 [2024-07-11 02:46:22.844108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.459 [2024-07-11 02:46:22.844140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:32.459 [2024-07-11 02:46:22.849326] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.459 [2024-07-11 02:46:22.849693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.459 [2024-07-11 02:46:22.849726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:32.459 [2024-07-11 02:46:22.854803] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.459 [2024-07-11 02:46:22.855129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.459 [2024-07-11 02:46:22.855162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:32.459 [2024-07-11 02:46:22.860170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.459 [2024-07-11 02:46:22.860471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.459 [2024-07-11 02:46:22.860504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:32.459 [2024-07-11 02:46:22.865703] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.459 [2024-07-11 02:46:22.866052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.459 [2024-07-11 02:46:22.866085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:32.459 [2024-07-11 02:46:22.871279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.459 [2024-07-11 02:46:22.871662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.459 [2024-07-11 02:46:22.871695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:32.459 [2024-07-11 02:46:22.876671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.459 [2024-07-11 02:46:22.876990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.459 [2024-07-11 02:46:22.877024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:22.882238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.719 [2024-07-11 02:46:22.882597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.719 [2024-07-11 02:46:22.882630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:22.887777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.719 [2024-07-11 02:46:22.888085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.719 [2024-07-11 02:46:22.888122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:22.893392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.719 [2024-07-11 02:46:22.893752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.719 [2024-07-11 02:46:22.893786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:22.898986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.719 [2024-07-11 02:46:22.899239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.719 [2024-07-11 02:46:22.899277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:22.904499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.719 [2024-07-11 02:46:22.904813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.719 [2024-07-11 02:46:22.904846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:22.909966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.719 [2024-07-11 02:46:22.910290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.719 [2024-07-11 02:46:22.910322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:22.915375] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.719 [2024-07-11 02:46:22.915741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.719 [2024-07-11 02:46:22.915775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:22.920939] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.719 [2024-07-11 02:46:22.921206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.719 [2024-07-11 02:46:22.921239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:22.926463] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.719 [2024-07-11 02:46:22.926781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.719 [2024-07-11 02:46:22.926814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:22.931903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.719 [2024-07-11 02:46:22.932273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.719 [2024-07-11 02:46:22.932310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:22.937367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.719 [2024-07-11 02:46:22.937710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.719 [2024-07-11 02:46:22.937743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:22.942772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.719 [2024-07-11 02:46:22.943051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.719 [2024-07-11 02:46:22.943090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:22.948260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.719 [2024-07-11 02:46:22.948531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.719 [2024-07-11 02:46:22.948563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:22.953859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.719 [2024-07-11 02:46:22.954220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.719 [2024-07-11 02:46:22.954253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:22.959318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.719 [2024-07-11 02:46:22.959689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.719 [2024-07-11 02:46:22.959722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:22.964909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.719 [2024-07-11 02:46:22.965300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.719 [2024-07-11 02:46:22.965340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:22.970362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.719 [2024-07-11 02:46:22.970638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.719 [2024-07-11 02:46:22.970672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:22.975983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.719 [2024-07-11 02:46:22.976339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.719 [2024-07-11 02:46:22.976372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:22.981549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.719 [2024-07-11 02:46:22.981904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.719 [2024-07-11 02:46:22.981937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:22.986909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.719 [2024-07-11 02:46:22.987259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.719 [2024-07-11 02:46:22.987291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:22.992293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.719 [2024-07-11 02:46:22.992592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.719 [2024-07-11 02:46:22.992625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:22.997823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.719 [2024-07-11 02:46:22.998168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.719 [2024-07-11 02:46:22.998201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:23.003267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.719 [2024-07-11 02:46:23.003642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.719 [2024-07-11 02:46:23.003675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:23.008653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.719 [2024-07-11 02:46:23.008963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.719 [2024-07-11 02:46:23.008997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:23.014057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.719 [2024-07-11 02:46:23.014316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.719 [2024-07-11 02:46:23.014349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:23.019557] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.719 [2024-07-11 02:46:23.019891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.719 [2024-07-11 02:46:23.019925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:32.719 [2024-07-11 02:46:23.025077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.720 [2024-07-11 02:46:23.025376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.720 [2024-07-11 02:46:23.025409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:32.720 [2024-07-11 02:46:23.030581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.720 [2024-07-11 02:46:23.030939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.720 [2024-07-11 02:46:23.030971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:32.720 [2024-07-11 02:46:23.036174] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.720 [2024-07-11 02:46:23.036568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.720 [2024-07-11 02:46:23.036607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:32.720 [2024-07-11 02:46:23.041774] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.720 [2024-07-11 02:46:23.042118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.720 [2024-07-11 02:46:23.042152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:32.720 [2024-07-11 02:46:23.047335] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.720 [2024-07-11 02:46:23.047607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.720 [2024-07-11 02:46:23.047641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:32.720 [2024-07-11 02:46:23.052823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.720 [2024-07-11 02:46:23.053077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.720 [2024-07-11 02:46:23.053110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:32.720 [2024-07-11 02:46:23.058200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.720 [2024-07-11 02:46:23.058554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.720 [2024-07-11 02:46:23.058587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:32.720 [2024-07-11 02:46:23.063794] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.720 [2024-07-11 02:46:23.064110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.720 [2024-07-11 02:46:23.064143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:32.720 [2024-07-11 02:46:23.069219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.720 [2024-07-11 02:46:23.069580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.720 [2024-07-11 02:46:23.069614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:32.720 [2024-07-11 02:46:23.074871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.720 [2024-07-11 02:46:23.075184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.720 [2024-07-11 02:46:23.075216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:32.720 [2024-07-11 02:46:23.080283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.720 [2024-07-11 02:46:23.080624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.720 [2024-07-11 02:46:23.080658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:32.720 [2024-07-11 02:46:23.085948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.720 [2024-07-11 02:46:23.086226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.720 [2024-07-11 02:46:23.086259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:32.720 [2024-07-11 02:46:23.091401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.720 [2024-07-11 02:46:23.091684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.720 [2024-07-11 02:46:23.091717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:32.720 [2024-07-11 02:46:23.096793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.720 [2024-07-11 02:46:23.097112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.720 [2024-07-11 02:46:23.097145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:32.720 [2024-07-11 02:46:23.102691] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.720 [2024-07-11 02:46:23.102982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.720 [2024-07-11 02:46:23.103015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:32.720 [2024-07-11 02:46:23.108092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.720 [2024-07-11 02:46:23.108428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.720 [2024-07-11 02:46:23.108461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:32.720 [2024-07-11 02:46:23.114209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.720 [2024-07-11 02:46:23.114487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.720 [2024-07-11 02:46:23.114528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:32.720 [2024-07-11 02:46:23.120744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.720 [2024-07-11 02:46:23.121017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.720 [2024-07-11 02:46:23.121050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:32.720 [2024-07-11 02:46:23.127170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.720 [2024-07-11 02:46:23.127445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.720 [2024-07-11 02:46:23.127477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:32.720 [2024-07-11 02:46:23.132709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.720 [2024-07-11 02:46:23.132958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.720 [2024-07-11 02:46:23.132994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:32.720 [2024-07-11 02:46:23.138047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.720 [2024-07-11 02:46:23.138314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.720 [2024-07-11 02:46:23.138352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:32.980 [2024-07-11 02:46:23.142821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.980 [2024-07-11 02:46:23.143097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.980 [2024-07-11 02:46:23.143132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:32.980 [2024-07-11 02:46:23.147526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.980 [2024-07-11 02:46:23.147796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.980 [2024-07-11 02:46:23.147832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:32.980 [2024-07-11 02:46:23.152238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.980 [2024-07-11 02:46:23.152495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.980 [2024-07-11 02:46:23.152537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:32.980 [2024-07-11 02:46:23.157141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:32.980 [2024-07-11 02:46:23.157397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:32.980 [2024-07-11 02:46:23.157430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:32.980 [2024-07-11 02:46:23.162281] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.980 [2024-07-11 02:46:23.162555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.980 [2024-07-11 02:46:23.162588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.980 [2024-07-11 02:46:23.167468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.980 [2024-07-11 02:46:23.167736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.980 [2024-07-11 02:46:23.167770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.980 [2024-07-11 02:46:23.172442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.980 [2024-07-11 02:46:23.172703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.980 [2024-07-11 02:46:23.172737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.980 [2024-07-11 02:46:23.177699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.980 [2024-07-11 02:46:23.177954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.980 [2024-07-11 02:46:23.177996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:40:32.980 [2024-07-11 02:46:23.182807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.980 [2024-07-11 02:46:23.183061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.980 [2024-07-11 02:46:23.183094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.980 [2024-07-11 02:46:23.188153] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.980 [2024-07-11 02:46:23.188408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.980 [2024-07-11 02:46:23.188443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.980 [2024-07-11 02:46:23.193499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.980 [2024-07-11 02:46:23.193773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.980 [2024-07-11 02:46:23.193810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.980 [2024-07-11 02:46:23.199209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.980 [2024-07-11 02:46:23.199497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.980 [2024-07-11 02:46:23.199537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.980 [2024-07-11 02:46:23.204278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.980 [2024-07-11 02:46:23.204506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.980 [2024-07-11 02:46:23.204547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.980 [2024-07-11 02:46:23.209417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.980 [2024-07-11 02:46:23.209666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.980 [2024-07-11 02:46:23.209701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.980 [2024-07-11 02:46:23.214603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.980 [2024-07-11 02:46:23.214840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.980 [2024-07-11 02:46:23.214874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.980 [2024-07-11 02:46:23.219836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.980 [2024-07-11 02:46:23.220084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.980 [2024-07-11 02:46:23.220118] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.980 [2024-07-11 02:46:23.224809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.980 [2024-07-11 02:46:23.225075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.980 [2024-07-11 02:46:23.225108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.980 [2024-07-11 02:46:23.230296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.230545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.230578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.235495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.235762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.235795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.240616] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.240858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:32.981 [2024-07-11 02:46:23.240892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.246145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.246401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.246435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.251414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.251671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.251705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.256527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.256772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.256805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.261579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.261835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.261867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.266873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.267100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.267133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.271920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.272144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.272177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.277184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.277431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.277464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.282361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.282596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.282630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.287453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.287696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.287730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.292473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.292731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.292764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.297610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.297848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.297880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.302619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 
00:40:32.981 [2024-07-11 02:46:23.302862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.302896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.307834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.308090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.308123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.312812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.313059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.313098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.318021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.318262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.318295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.323045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.323284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.323321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.328059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.328300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.328333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.333132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.333384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.333420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.338135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.338365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.338399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 
02:46:23.343250] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.343476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.343519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.348351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.348595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.348629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.353607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.353851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.353885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.358717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.358962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.358995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.363687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.363927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.363961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.368848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.369090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.369125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.373915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.981 [2024-07-11 02:46:23.374145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.981 [2024-07-11 02:46:23.374179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.981 [2024-07-11 02:46:23.379041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.982 [2024-07-11 02:46:23.379278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.982 [2024-07-11 02:46:23.379317] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.982 [2024-07-11 02:46:23.384120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.982 [2024-07-11 02:46:23.384362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.982 [2024-07-11 02:46:23.384396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:32.982 [2024-07-11 02:46:23.389245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.982 [2024-07-11 02:46:23.389490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.982 [2024-07-11 02:46:23.389535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:32.982 [2024-07-11 02:46:23.394566] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:32.982 [2024-07-11 02:46:23.394807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:32.982 [2024-07-11 02:46:23.394842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:32.982 [2024-07-11 02:46:23.399775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.241 [2024-07-11 02:46:23.400021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.241 [2024-07-11 
02:46:23.400062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:33.241 [2024-07-11 02:46:23.404863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.241 [2024-07-11 02:46:23.405096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.241 [2024-07-11 02:46:23.405130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:33.241 [2024-07-11 02:46:23.409976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.241 [2024-07-11 02:46:23.410205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.241 [2024-07-11 02:46:23.410240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:33.241 [2024-07-11 02:46:23.415025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.241 [2024-07-11 02:46:23.415275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.241 [2024-07-11 02:46:23.415308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:33.241 [2024-07-11 02:46:23.420043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.241 [2024-07-11 02:46:23.420267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.241 [2024-07-11 02:46:23.420310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:33.241 [2024-07-11 02:46:23.425170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.241 [2024-07-11 02:46:23.425429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.241 [2024-07-11 02:46:23.425463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:33.241 [2024-07-11 02:46:23.430265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.241 [2024-07-11 02:46:23.430519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.241 [2024-07-11 02:46:23.430553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:33.241 [2024-07-11 02:46:23.435326] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.241 [2024-07-11 02:46:23.435554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.241 [2024-07-11 02:46:23.435587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:33.241 [2024-07-11 02:46:23.440550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.241 [2024-07-11 02:46:23.440791] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.241 [2024-07-11 02:46:23.440824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:33.241 [2024-07-11 02:46:23.445602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.241 [2024-07-11 02:46:23.445840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.241 [2024-07-11 02:46:23.445893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:33.241 [2024-07-11 02:46:23.450730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.241 [2024-07-11 02:46:23.450981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.241 [2024-07-11 02:46:23.451014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:33.241 [2024-07-11 02:46:23.455952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.241 [2024-07-11 02:46:23.456187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.456219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.461140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.461372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.461405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.466329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.466574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.466607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.471468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.471737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.471771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.476727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.476966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.477000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.482054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.482295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.482332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.487147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.487387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.487420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.492166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.492392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.492424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.497241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.497484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.497525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.502312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.502552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.502585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.507346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.507580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.507614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.512525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.512780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.512814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.517720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.517977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.518010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.522779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.523000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.523034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.527954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.528208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.528241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.533160] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.533390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.533429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.538342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.538598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.538632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.543419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.543652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.543695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.548584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.548821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.548854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.553809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.554066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.554099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.558958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.559210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.559243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.564060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.564295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.564328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.569121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.569356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.569390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.574237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.574467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.574501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.579427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.579693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.579726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.584498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.584763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.584796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.589786] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.590020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.590058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.594822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.595057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.595090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.599903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.600143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.242 [2024-07-11 02:46:23.600176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:33.242 [2024-07-11 02:46:23.605119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.242 [2024-07-11 02:46:23.605368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.243 [2024-07-11 02:46:23.605401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:33.243 [2024-07-11 02:46:23.610336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.243 [2024-07-11 02:46:23.610575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.243 [2024-07-11 02:46:23.610609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:33.243 [2024-07-11 02:46:23.615330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.243 [2024-07-11 02:46:23.615581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.243 [2024-07-11 02:46:23.615615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:33.243 [2024-07-11 02:46:23.620521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.243 [2024-07-11 02:46:23.620762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.243 [2024-07-11 02:46:23.620795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:33.243 [2024-07-11 02:46:23.625892] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.243 [2024-07-11 02:46:23.626150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.243 [2024-07-11 02:46:23.626184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:33.243 [2024-07-11 02:46:23.631145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.243 [2024-07-11 02:46:23.631371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.243 [2024-07-11 02:46:23.631404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:33.243 [2024-07-11 02:46:23.636381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.243 [2024-07-11 02:46:23.636623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.243 [2024-07-11 02:46:23.636658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:33.243 [2024-07-11 02:46:23.641300] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.243 [2024-07-11 02:46:23.641570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.243 [2024-07-11 02:46:23.641604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:33.243 [2024-07-11 02:46:23.646543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.243 [2024-07-11 02:46:23.646788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.243 [2024-07-11 02:46:23.646823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:33.243 [2024-07-11 02:46:23.651608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.243 [2024-07-11 02:46:23.651843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.243 [2024-07-11 02:46:23.651877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:33.243 [2024-07-11 02:46:23.656737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.243 [2024-07-11 02:46:23.656957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.243 [2024-07-11 02:46:23.656991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:33.502 [2024-07-11 02:46:23.662048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.502 [2024-07-11 02:46:23.662305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.502 [2024-07-11 02:46:23.662340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:33.502 [2024-07-11 02:46:23.667141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.502 [2024-07-11 02:46:23.667398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.502 [2024-07-11 02:46:23.667439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:33.502 [2024-07-11 02:46:23.672169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.502 [2024-07-11 02:46:23.672409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.502 [2024-07-11 02:46:23.672442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:33.502 [2024-07-11 02:46:23.677247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.502 [2024-07-11 02:46:23.677471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.502 [2024-07-11 02:46:23.677504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:33.502 [2024-07-11 02:46:23.682367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.502 [2024-07-11 02:46:23.682616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.502 [2024-07-11 02:46:23.682650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:33.502 [2024-07-11 02:46:23.687552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.502 [2024-07-11 02:46:23.687793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.502 [2024-07-11 02:46:23.687826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:33.502 [2024-07-11 02:46:23.692678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.502 [2024-07-11 02:46:23.692920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.502 [2024-07-11 02:46:23.692953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:33.502 [2024-07-11 02:46:23.697805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.502 [2024-07-11 02:46:23.698048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.502 [2024-07-11 02:46:23.698081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:33.502 [2024-07-11 02:46:23.702771] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.502 [2024-07-11 02:46:23.702997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.502 [2024-07-11 02:46:23.703031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:33.502 [2024-07-11 02:46:23.707773] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.502 [2024-07-11 02:46:23.708002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.502 [2024-07-11 02:46:23.708034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:33.502 [2024-07-11 02:46:23.713073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.502 [2024-07-11 02:46:23.713329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.502 [2024-07-11 02:46:23.713362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:33.502 [2024-07-11 02:46:23.718159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.502 [2024-07-11 02:46:23.718382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.502 [2024-07-11 02:46:23.718415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:33.502 [2024-07-11 02:46:23.723285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.502 [2024-07-11 02:46:23.723524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.502 [2024-07-11 02:46:23.723558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:33.502 [2024-07-11 02:46:23.728560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.502 [2024-07-11 02:46:23.728790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.502 [2024-07-11 02:46:23.728823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:33.502 [2024-07-11 02:46:23.733730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.502 [2024-07-11 02:46:23.733967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.502 [2024-07-11 02:46:23.734001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:33.502 [2024-07-11 02:46:23.738625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.502 [2024-07-11 02:46:23.738880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.503 [2024-07-11 02:46:23.738913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:33.503 [2024-07-11 02:46:23.743889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.503 [2024-07-11 02:46:23.744123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.503 [2024-07-11 02:46:23.744161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:33.503 [2024-07-11 02:46:23.749038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.503 [2024-07-11 02:46:23.749284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.503 [2024-07-11 02:46:23.749317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:33.503 [2024-07-11 02:46:23.754124] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.503 [2024-07-11 02:46:23.754370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.503 [2024-07-11 02:46:23.754403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:33.503 [2024-07-11 02:46:23.759122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.503 [2024-07-11 02:46:23.759353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.503 [2024-07-11 02:46:23.759386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:33.503 [2024-07-11 02:46:23.764220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.503 [2024-07-11 02:46:23.764449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.503 [2024-07-11 02:46:23.764482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:33.503 [2024-07-11 02:46:23.769538] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.503 [2024-07-11 02:46:23.769773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.503 [2024-07-11 02:46:23.769805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:33.503 [2024-07-11 02:46:23.774752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.503 [2024-07-11 02:46:23.774985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.503 [2024-07-11 02:46:23.775019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:33.503 [2024-07-11 02:46:23.779904] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.503 [2024-07-11 02:46:23.780169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.503 [2024-07-11 02:46:23.780202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:33.503 [2024-07-11 02:46:23.784966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.503 [2024-07-11 02:46:23.785197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.503 [2024-07-11 02:46:23.785230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:33.503 [2024-07-11 02:46:23.789921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.503 [2024-07-11 02:46:23.790176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.503 [2024-07-11 02:46:23.790208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:33.503 [2024-07-11 02:46:23.794950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.503 [2024-07-11 02:46:23.795177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.503 [2024-07-11 02:46:23.795211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:33.503 [2024-07-11 02:46:23.800201] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.503 [2024-07-11 02:46:23.800430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.503 [2024-07-11 02:46:23.800469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:33.503 [2024-07-11 02:46:23.805260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.503 [2024-07-11 02:46:23.805518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.503 [2024-07-11 02:46:23.805551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:33.503 [2024-07-11 02:46:23.810362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.503 [2024-07-11 02:46:23.810617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.503 [2024-07-11 02:46:23.810653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:33.503 [2024-07-11 02:46:23.815519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.503 [2024-07-11 02:46:23.815755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.503 [2024-07-11 02:46:23.815788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:33.503 [2024-07-11 02:46:23.820774] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.503 [2024-07-11 02:46:23.821026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.503 [2024-07-11 02:46:23.821063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:33.503 [2024-07-11 02:46:23.825879] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.503 [2024-07-11 02:46:23.826121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.503 [2024-07-11 02:46:23.826153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:33.503 [2024-07-11 02:46:23.831227] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.503 [2024-07-11 02:46:23.831461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.503 [2024-07-11 02:46:23.831494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:33.503 [2024-07-11 02:46:23.836289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.503 [2024-07-11 02:46:23.836522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.503 [2024-07-11 02:46:23.836555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:33.503 [2024-07-11 02:46:23.841476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.503 [2024-07-11 02:46:23.841730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.503 [2024-07-11 02:46:23.841764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:33.503 [2024-07-11 02:46:23.846480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.503 [2024-07-11 02:46:23.846714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.503 [2024-07-11 02:46:23.846748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:33.503 [2024-07-11 02:46:23.851544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.503 [2024-07-11 02:46:23.851788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.503 [2024-07-11 02:46:23.851821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:33.503 [2024-07-11 02:46:23.856747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90
00:40:33.503 [2024-07-11 02:46:23.856993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:33.503 [2024-07-11 02:46:23.857025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:33.503 [2024-07-11
02:46:23.861909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.503 [2024-07-11 02:46:23.862133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.503 [2024-07-11 02:46:23.862166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:33.503 [2024-07-11 02:46:23.867023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.503 [2024-07-11 02:46:23.867265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.503 [2024-07-11 02:46:23.867298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:33.503 [2024-07-11 02:46:23.872088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.503 [2024-07-11 02:46:23.872316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.503 [2024-07-11 02:46:23.872349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:33.503 [2024-07-11 02:46:23.877151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.503 [2024-07-11 02:46:23.877381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.503 [2024-07-11 02:46:23.877414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:33.503 [2024-07-11 02:46:23.882190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.503 [2024-07-11 02:46:23.882436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.503 [2024-07-11 02:46:23.882469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:33.503 [2024-07-11 02:46:23.887301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.503 [2024-07-11 02:46:23.887548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.504 [2024-07-11 02:46:23.887587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:33.504 [2024-07-11 02:46:23.892483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.504 [2024-07-11 02:46:23.892745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.504 [2024-07-11 02:46:23.892779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:33.504 [2024-07-11 02:46:23.897687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.504 [2024-07-11 02:46:23.897940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.504 [2024-07-11 02:46:23.897973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:33.504 [2024-07-11 02:46:23.902875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.504 [2024-07-11 02:46:23.903123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.504 [2024-07-11 02:46:23.903156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:33.504 [2024-07-11 02:46:23.907983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.504 [2024-07-11 02:46:23.908213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.504 [2024-07-11 02:46:23.908247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:33.504 [2024-07-11 02:46:23.913241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.504 [2024-07-11 02:46:23.913476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.504 [2024-07-11 02:46:23.913516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:33.504 [2024-07-11 02:46:23.918259] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.504 [2024-07-11 02:46:23.918526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.504 [2024-07-11 02:46:23.918561] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:33.763 [2024-07-11 02:46:23.923371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.763 [2024-07-11 02:46:23.923620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.763 [2024-07-11 02:46:23.923654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:33.763 [2024-07-11 02:46:23.928595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.763 [2024-07-11 02:46:23.928826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.763 [2024-07-11 02:46:23.928859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:33.763 [2024-07-11 02:46:23.934161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.763 [2024-07-11 02:46:23.934409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.763 [2024-07-11 02:46:23.934442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:33.763 [2024-07-11 02:46:23.939548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.763 [2024-07-11 02:46:23.939819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:40:33.763 [2024-07-11 02:46:23.939852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:33.763 [2024-07-11 02:46:23.945063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.763 [2024-07-11 02:46:23.945406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.763 [2024-07-11 02:46:23.945439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:33.763 [2024-07-11 02:46:23.950378] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.763 [2024-07-11 02:46:23.950657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.763 [2024-07-11 02:46:23.950690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:33.763 [2024-07-11 02:46:23.955868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.763 [2024-07-11 02:46:23.956240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.763 [2024-07-11 02:46:23.956273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:33.763 [2024-07-11 02:46:23.961065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.763 [2024-07-11 02:46:23.961354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.763 [2024-07-11 02:46:23.961386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:33.763 [2024-07-11 02:46:23.966598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.763 [2024-07-11 02:46:23.966832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.763 [2024-07-11 02:46:23.966865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:33.763 [2024-07-11 02:46:23.972158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.763 [2024-07-11 02:46:23.972406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.763 [2024-07-11 02:46:23.972439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:33.763 [2024-07-11 02:46:23.977799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.763 [2024-07-11 02:46:23.978000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.763 [2024-07-11 02:46:23.978033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:33.763 [2024-07-11 02:46:23.983021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:23.983238] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:23.983271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:23.988419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:23.988659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:23.988692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:23.994009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:23.994260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:23.994293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:23.999536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:23.999757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:23.999791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:24.005081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 
00:40:33.764 [2024-07-11 02:46:24.005319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:24.005352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:24.010681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:24.010936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:24.010969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:24.015925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:24.016129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:24.016162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:24.021450] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:24.021688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:24.021727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:24.026777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:24.027028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:24.027067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:24.032025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:24.032252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:24.032285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:24.037104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:24.037268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:24.037301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:24.042633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:24.042891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:24.042924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:24.047837] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:24.048118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:24.048150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:24.053204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:24.053463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:24.053496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:24.058664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:24.058900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:24.058932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:24.064114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:24.064359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:24.064391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:24.069718] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:24.069976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:24.070008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:24.075325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:24.075576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:24.075609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:24.080673] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:24.080907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:24.080940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:24.086112] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:24.086359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:24.086396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:24.091550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:24.091828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:24.091862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:24.097174] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:24.097416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:24.097449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:24.102528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:24.102770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:24.102803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:24.107851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:24.108087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:24.108120] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:24.113403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:24.113669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:24.113702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:24.119139] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:24.119381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:24.119413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:24.124489] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:24.124716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:24.124752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:24.129807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:24.130058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:24.130091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:24.135095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:24.135300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.764 [2024-07-11 02:46:24.135334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:33.764 [2024-07-11 02:46:24.140536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.764 [2024-07-11 02:46:24.140762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.765 [2024-07-11 02:46:24.140796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:33.765 [2024-07-11 02:46:24.146105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.765 [2024-07-11 02:46:24.146349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.765 [2024-07-11 02:46:24.146383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:33.765 [2024-07-11 02:46:24.151762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.765 [2024-07-11 02:46:24.152030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.765 [2024-07-11 02:46:24.152063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:33.765 [2024-07-11 02:46:24.157227] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.765 [2024-07-11 02:46:24.157490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.765 [2024-07-11 02:46:24.157530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:33.765 [2024-07-11 02:46:24.162414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.765 [2024-07-11 02:46:24.162700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.765 [2024-07-11 02:46:24.162734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:33.765 [2024-07-11 02:46:24.167833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.765 [2024-07-11 02:46:24.168033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.765 [2024-07-11 02:46:24.168073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:33.765 [2024-07-11 02:46:24.173246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.765 [2024-07-11 02:46:24.173502] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.765 [2024-07-11 02:46:24.173542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:33.765 [2024-07-11 02:46:24.178726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:33.765 [2024-07-11 02:46:24.178957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:33.765 [2024-07-11 02:46:24.178990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.025 [2024-07-11 02:46:24.184189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.025 [2024-07-11 02:46:24.184415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.025 [2024-07-11 02:46:24.184449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.025 [2024-07-11 02:46:24.189660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.025 [2024-07-11 02:46:24.189896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.025 [2024-07-11 02:46:24.189930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.025 [2024-07-11 02:46:24.195168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 
00:40:34.025 [2024-07-11 02:46:24.195372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.025 [2024-07-11 02:46:24.195411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.025 [2024-07-11 02:46:24.200550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.025 [2024-07-11 02:46:24.200801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.025 [2024-07-11 02:46:24.200834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.025 [2024-07-11 02:46:24.205914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.025 [2024-07-11 02:46:24.206148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.025 [2024-07-11 02:46:24.206181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.025 [2024-07-11 02:46:24.211373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.025 [2024-07-11 02:46:24.211609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.025 [2024-07-11 02:46:24.211650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.025 [2024-07-11 02:46:24.216727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.025 [2024-07-11 02:46:24.216963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.025 [2024-07-11 02:46:24.216997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.025 [2024-07-11 02:46:24.222254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.025 [2024-07-11 02:46:24.222550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.025 [2024-07-11 02:46:24.222591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.025 [2024-07-11 02:46:24.227471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.025 [2024-07-11 02:46:24.227729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.025 [2024-07-11 02:46:24.227763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.025 [2024-07-11 02:46:24.232683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.025 [2024-07-11 02:46:24.232937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.025 [2024-07-11 02:46:24.232969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.025 [2024-07-11 
02:46:24.238271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.025 [2024-07-11 02:46:24.238555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.025 [2024-07-11 02:46:24.238596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.025 [2024-07-11 02:46:24.243641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.025 [2024-07-11 02:46:24.243867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.025 [2024-07-11 02:46:24.243900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.025 [2024-07-11 02:46:24.248982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.025 [2024-07-11 02:46:24.249194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.025 [2024-07-11 02:46:24.249227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.025 [2024-07-11 02:46:24.254411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.025 [2024-07-11 02:46:24.254652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.025 [2024-07-11 02:46:24.254685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.025 [2024-07-11 02:46:24.259673] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.025 [2024-07-11 02:46:24.259873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.025 [2024-07-11 02:46:24.259912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.025 [2024-07-11 02:46:24.265095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.025 [2024-07-11 02:46:24.265325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.025 [2024-07-11 02:46:24.265358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.025 [2024-07-11 02:46:24.270453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.025 [2024-07-11 02:46:24.270703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.025 [2024-07-11 02:46:24.270736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.025 [2024-07-11 02:46:24.276048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.025 [2024-07-11 02:46:24.276257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.025 [2024-07-11 02:46:24.276289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.025 [2024-07-11 02:46:24.281475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.025 [2024-07-11 02:46:24.281714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.025 [2024-07-11 02:46:24.281748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.025 [2024-07-11 02:46:24.287003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.025 [2024-07-11 02:46:24.287300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.025 [2024-07-11 02:46:24.287333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.025 [2024-07-11 02:46:24.292496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.025 [2024-07-11 02:46:24.292748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.025 [2024-07-11 02:46:24.292781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.025 [2024-07-11 02:46:24.297978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.025 [2024-07-11 02:46:24.298248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.025 [2024-07-11 02:46:24.298281] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.025 [2024-07-11 02:46:24.303428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.025 [2024-07-11 02:46:24.303681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.025 [2024-07-11 02:46:24.303714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.025 [2024-07-11 02:46:24.308860] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.025 [2024-07-11 02:46:24.309135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.025 [2024-07-11 02:46:24.309167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.025 [2024-07-11 02:46:24.314258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.026 [2024-07-11 02:46:24.314501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.026 [2024-07-11 02:46:24.314544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.026 [2024-07-11 02:46:24.319715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.026 [2024-07-11 02:46:24.319950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:40:34.026 [2024-07-11 02:46:24.319983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.026 [2024-07-11 02:46:24.325021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.026 [2024-07-11 02:46:24.325276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.026 [2024-07-11 02:46:24.325310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.026 [2024-07-11 02:46:24.330460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.026 [2024-07-11 02:46:24.330724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.026 [2024-07-11 02:46:24.330758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.026 [2024-07-11 02:46:24.335792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.026 [2024-07-11 02:46:24.336070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.026 [2024-07-11 02:46:24.336104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.026 [2024-07-11 02:46:24.341236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.026 [2024-07-11 02:46:24.341534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.026 [2024-07-11 02:46:24.341568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.026 [2024-07-11 02:46:24.346870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.026 [2024-07-11 02:46:24.347131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.026 [2024-07-11 02:46:24.347164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.026 [2024-07-11 02:46:24.352301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.026 [2024-07-11 02:46:24.352599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.026 [2024-07-11 02:46:24.352636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.026 [2024-07-11 02:46:24.357790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.026 [2024-07-11 02:46:24.358039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.026 [2024-07-11 02:46:24.358075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.026 [2024-07-11 02:46:24.363148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.026 [2024-07-11 02:46:24.363370] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.026 [2024-07-11 02:46:24.363402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.026 [2024-07-11 02:46:24.368338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.026 [2024-07-11 02:46:24.368605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.026 [2024-07-11 02:46:24.368639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.026 [2024-07-11 02:46:24.373677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x25bb320) with pdu=0x2000190fef90 00:40:34.026 [2024-07-11 02:46:24.373911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.026 [2024-07-11 02:46:24.373945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.026 00:40:34.026 Latency(us) 00:40:34.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:34.026 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:40:34.026 nvme0n1 : 2.00 5866.04 733.25 0.00 0.00 2719.50 2099.58 7087.60 00:40:34.026 =================================================================================================================== 00:40:34.026 Total : 5866.04 733.25 0.00 0.00 2719.50 2099.58 7087.60 00:40:34.026 0 00:40:34.026 02:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:40:34.026 02:46:24 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:40:34.026 02:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:40:34.026 | .driver_specific 00:40:34.026 | .nvme_error 00:40:34.026 | .status_code 00:40:34.026 | .command_transient_transport_error' 00:40:34.026 02:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:40:34.284 02:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 378 > 0 )) 00:40:34.284 02:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1975966 00:40:34.284 02:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1975966 ']' 00:40:34.284 02:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1975966 00:40:34.284 02:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:40:34.284 02:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:34.284 02:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1975966 00:40:34.543 02:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:34.543 02:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:34.543 02:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1975966' 00:40:34.543 killing process with pid 1975966 00:40:34.543 02:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1975966 00:40:34.543 Received shutdown signal, test time was about 2.000000 seconds 00:40:34.543 00:40:34.543 
Latency(us) 00:40:34.543 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:34.543 =================================================================================================================== 00:40:34.543 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:34.543 02:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1975966 00:40:34.543 02:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1974831 00:40:34.543 02:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1974831 ']' 00:40:34.543 02:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1974831 00:40:34.543 02:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:40:34.543 02:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:34.543 02:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1974831 00:40:34.543 02:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:34.543 02:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:34.543 02:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1974831' 00:40:34.543 killing process with pid 1974831 00:40:34.543 02:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1974831 00:40:34.543 02:46:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1974831 00:40:34.803 00:40:34.803 real 0m15.213s 00:40:34.803 user 0m30.510s 00:40:34.803 sys 0m4.209s 00:40:34.803 02:46:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:34.803 02:46:25 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:34.803 ************************************ 00:40:34.803 END TEST nvmf_digest_error 00:40:34.803 ************************************ 00:40:34.803 02:46:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:40:34.803 02:46:25 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:40:34.803 02:46:25 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:40:34.803 02:46:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:34.803 02:46:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:40:34.803 02:46:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:34.803 02:46:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:40:34.803 02:46:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:34.803 02:46:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:34.803 rmmod nvme_tcp 00:40:34.803 rmmod nvme_fabrics 00:40:34.803 rmmod nvme_keyring 00:40:34.803 02:46:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:34.803 02:46:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:40:34.803 02:46:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:40:34.803 02:46:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1974831 ']' 00:40:34.803 02:46:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1974831 00:40:34.803 02:46:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1974831 ']' 00:40:34.803 02:46:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1974831 00:40:34.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1974831) - No such process 00:40:34.803 02:46:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1974831 is not found' 00:40:34.803 
Process with pid 1974831 is not found 00:40:34.803 02:46:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:34.803 02:46:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:34.803 02:46:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:34.803 02:46:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:34.803 02:46:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:34.803 02:46:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:34.803 02:46:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:34.803 02:46:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:37.337 02:46:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:37.337 00:40:37.337 real 0m34.835s 00:40:37.337 user 1m2.755s 00:40:37.337 sys 0m9.757s 00:40:37.337 02:46:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:37.337 02:46:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:40:37.337 ************************************ 00:40:37.337 END TEST nvmf_digest 00:40:37.337 ************************************ 00:40:37.337 02:46:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:40:37.337 02:46:27 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:40:37.337 02:46:27 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:40:37.337 02:46:27 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:40:37.337 02:46:27 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:40:37.337 02:46:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:40:37.337 02:46:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:37.337 02:46:27 nvmf_tcp 
-- common/autotest_common.sh@10 -- # set +x 00:40:37.337 ************************************ 00:40:37.337 START TEST nvmf_bdevperf 00:40:37.337 ************************************ 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:40:37.337 * Looking for test storage... 00:40:37.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:37.337 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:37.338 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:37.338 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:37.338 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:37.338 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:37.338 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:37.338 02:46:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:37.338 02:46:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:37.338 02:46:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:40:37.338 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:37.338 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:37.338 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:37.338 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:37.338 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:37.338 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:37.338 02:46:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:37.338 02:46:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:37.338 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:37.338 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:37.338 02:46:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 
00:40:37.338 02:46:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:38.718 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:38.718 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:40:38.718 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:38.718 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:38.718 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:38.718 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:38.718 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:38.718 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:40:38.718 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:38.718 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:40:38.718 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:40:38.718 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:40:38.718 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:40:38.718 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:40:38.718 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:40:38.718 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:38.718 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:38.718 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:38.718 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:38.718 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:40:38.718 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:38.718 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:40:38.719 Found 0000:08:00.0 (0x8086 - 0x159b) 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:40:38.719 Found 0000:08:00.1 (0x8086 - 0x159b) 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:40:38.719 Found net devices under 0000:08:00.0: cvl_0_0 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:40:38.719 Found net devices under 0000:08:00.1: cvl_0_1 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:38.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:38.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:40:38.719 00:40:38.719 --- 10.0.0.2 ping statistics --- 00:40:38.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:38.719 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:38.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:38.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:40:38.719 00:40:38.719 --- 10.0.0.1 ping statistics --- 00:40:38.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:38.719 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:38.719 02:46:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:38.719 02:46:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:40:38.719 02:46:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:40:38.719 02:46:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:38.719 02:46:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:38.719 02:46:29 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:40:38.719 02:46:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1977781 00:40:38.719 02:46:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:40:38.719 02:46:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1977781 00:40:38.719 02:46:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1977781 ']' 00:40:38.719 02:46:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:38.719 02:46:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:38.719 02:46:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:38.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:38.719 02:46:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:38.719 02:46:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:38.719 [2024-07-11 02:46:29.060234] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:40:38.719 [2024-07-11 02:46:29.060331] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:38.719 EAL: No free 2048 kB hugepages reported on node 1 00:40:38.719 [2024-07-11 02:46:29.125066] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:38.977 [2024-07-11 02:46:29.213138] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:40:38.977 [2024-07-11 02:46:29.213196] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:38.977 [2024-07-11 02:46:29.213219] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:38.977 [2024-07-11 02:46:29.213245] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:38.977 [2024-07-11 02:46:29.213263] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:38.977 [2024-07-11 02:46:29.213361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:40:38.977 [2024-07-11 02:46:29.213408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:40:38.977 [2024-07-11 02:46:29.213414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:38.977 02:46:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:38.977 02:46:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:40:38.977 02:46:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:38.977 02:46:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:38.977 02:46:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:38.977 02:46:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:38.977 02:46:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:38.977 02:46:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:38.977 02:46:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:38.977 [2024-07-11 02:46:29.336911] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:38.977 02:46:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:40:38.977 02:46:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:38.977 02:46:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:38.977 02:46:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:38.977 Malloc0 00:40:38.977 02:46:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:38.977 02:46:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:38.977 02:46:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:38.977 02:46:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:38.977 02:46:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:38.977 02:46:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:38.977 02:46:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:38.977 02:46:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:39.235 02:46:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:39.235 02:46:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:39.235 02:46:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:39.235 02:46:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:39.235 [2024-07-11 02:46:29.402311] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:39.235 02:46:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:39.235 02:46:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:40:39.235 02:46:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:40:39.235 02:46:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:40:39.235 02:46:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:40:39.235 02:46:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:39.235 02:46:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:39.235 { 00:40:39.235 "params": { 00:40:39.235 "name": "Nvme$subsystem", 00:40:39.235 "trtype": "$TEST_TRANSPORT", 00:40:39.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:39.235 "adrfam": "ipv4", 00:40:39.235 "trsvcid": "$NVMF_PORT", 00:40:39.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:39.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:39.235 "hdgst": ${hdgst:-false}, 00:40:39.235 "ddgst": ${ddgst:-false} 00:40:39.235 }, 00:40:39.235 "method": "bdev_nvme_attach_controller" 00:40:39.235 } 00:40:39.235 EOF 00:40:39.235 )") 00:40:39.235 02:46:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:40:39.235 02:46:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 
00:40:39.235 02:46:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:40:39.235 02:46:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:39.235 "params": { 00:40:39.235 "name": "Nvme1", 00:40:39.235 "trtype": "tcp", 00:40:39.235 "traddr": "10.0.0.2", 00:40:39.235 "adrfam": "ipv4", 00:40:39.235 "trsvcid": "4420", 00:40:39.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:39.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:39.235 "hdgst": false, 00:40:39.235 "ddgst": false 00:40:39.235 }, 00:40:39.235 "method": "bdev_nvme_attach_controller" 00:40:39.235 }' 00:40:39.235 [2024-07-11 02:46:29.451345] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:40:39.235 [2024-07-11 02:46:29.451435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1977809 ] 00:40:39.235 EAL: No free 2048 kB hugepages reported on node 1 00:40:39.235 [2024-07-11 02:46:29.511372] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:39.235 [2024-07-11 02:46:29.598882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:39.494 Running I/O for 1 seconds... 
00:40:40.468 00:40:40.468 Latency(us) 00:40:40.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:40.468 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:40:40.468 Verification LBA range: start 0x0 length 0x4000 00:40:40.468 Nvme1n1 : 1.01 7468.18 29.17 0.00 0.00 17017.65 2560.76 19903.53 00:40:40.468 =================================================================================================================== 00:40:40.468 Total : 7468.18 29.17 0.00 0.00 17017.65 2560.76 19903.53 00:40:40.727 02:46:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1977944 00:40:40.727 02:46:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:40:40.727 02:46:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:40:40.727 02:46:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:40:40.727 02:46:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:40:40.727 02:46:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:40:40.727 02:46:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:40.727 02:46:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:40.727 { 00:40:40.727 "params": { 00:40:40.727 "name": "Nvme$subsystem", 00:40:40.727 "trtype": "$TEST_TRANSPORT", 00:40:40.727 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:40.727 "adrfam": "ipv4", 00:40:40.727 "trsvcid": "$NVMF_PORT", 00:40:40.727 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:40.727 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:40.727 "hdgst": ${hdgst:-false}, 00:40:40.727 "ddgst": ${ddgst:-false} 00:40:40.727 }, 00:40:40.727 "method": "bdev_nvme_attach_controller" 00:40:40.727 } 00:40:40.727 EOF 00:40:40.727 )") 00:40:40.727 02:46:30 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@554 -- # cat 00:40:40.727 02:46:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:40:40.727 02:46:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:40:40.727 02:46:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:40.727 "params": { 00:40:40.727 "name": "Nvme1", 00:40:40.727 "trtype": "tcp", 00:40:40.727 "traddr": "10.0.0.2", 00:40:40.727 "adrfam": "ipv4", 00:40:40.727 "trsvcid": "4420", 00:40:40.727 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:40.727 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:40.727 "hdgst": false, 00:40:40.727 "ddgst": false 00:40:40.727 }, 00:40:40.727 "method": "bdev_nvme_attach_controller" 00:40:40.727 }' 00:40:40.727 [2024-07-11 02:46:31.025134] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:40:40.727 [2024-07-11 02:46:31.025236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1977944 ] 00:40:40.727 EAL: No free 2048 kB hugepages reported on node 1 00:40:40.727 [2024-07-11 02:46:31.085451] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:40.985 [2024-07-11 02:46:31.172539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:40.985 Running I/O for 15 seconds... 
00:40:44.263 02:46:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1977781 00:40:44.263 02:46:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:40:44.263 [2024-07-11 02:46:33.995678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.995728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.995761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.995783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.995803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.995822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.995841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.995861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.995881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.995899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.995919] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.995936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.995955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.995972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.995991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.996010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.996029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.996048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.996066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.996084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.996103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.996119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.996138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.996162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.996183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.996200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.996219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.996236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.996256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.996274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.996299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.996316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.996335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 
[2024-07-11 02:46:33.996351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.996369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.996386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.996407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.996425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.996446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.996463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.996485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.996504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.996533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.996552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.996581] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.996600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.996626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.996644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.996665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.996682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.996699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.996716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.996734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.996749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.263 [2024-07-11 02:46:33.996767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.263 [2024-07-11 02:46:33.996783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.996801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.996817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.996835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.996851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.996869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.996886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.996904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.996920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.996938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.996954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.996972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:44.264 [2024-07-11 02:46:33.996989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997181] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:44.264 [2024-07-11 02:46:33.997588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997782] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.997967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.997989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.998005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.998023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.998039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.998056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.998073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.998090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.998107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.998124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.998141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.998159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 
[2024-07-11 02:46:33.998175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.998193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.998209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.264 [2024-07-11 02:46:33.998226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.264 [2024-07-11 02:46:33.998243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.998261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.998278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.998296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.998313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.998331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.998347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.998364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.998380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.998398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.998418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.998437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.998453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.998470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.998486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.998504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.998528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.998547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.998566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.998584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.998600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.998618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.998634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.998652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.998669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.998687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.998703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.998721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.998737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.998755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 
[2024-07-11 02:46:33.998771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.998788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.998804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.998823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.998839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.998860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.998877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.998895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.998911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.998929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.998945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.998963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.998979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.998997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.999014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.999031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.999048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.999066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.999082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.999100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.999116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.999134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.999150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.999168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.999184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.999202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.999218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.999236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.999253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.999270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.999294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.999312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.999328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.999346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:44.265 [2024-07-11 02:46:33.999362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.999381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.999398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.999416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.999432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.999450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.999466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.999483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.999499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.999731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:44.265 [2024-07-11 02:46:33.999751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:44.265 [2024-07-11 02:46:33.999769] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:44.265 [2024-07-11 02:46:33.999785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:40:44.265 [2024-07-11 02:46:33.999803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:44.265 [2024-07-11 02:46:33.999819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:40:44.265 [2024-07-11 02:46:33.999837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:44.265 [2024-07-11 02:46:33.999853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:40:44.265 [2024-07-11 02:46:33.999871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:44.265 [2024-07-11 02:46:33.999887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:40:44.265 [2024-07-11 02:46:33.999905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:40:44.265 [2024-07-11 02:46:33.999922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:40:44.265 [2024-07-11 02:46:33.999944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:40:44.266 [2024-07-11 02:46:33.999961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:40:44.266 [2024-07-11 02:46:33.999978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:40:44.266 [2024-07-11 02:46:33.999994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:40:44.266 [2024-07-11 02:46:34.000013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:40:44.266 [2024-07-11 02:46:34.000029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:40:44.266 [2024-07-11 02:46:34.000046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:40:44.266 [2024-07-11 02:46:34.000063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:40:44.266 [2024-07-11 02:46:34.000080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:40:44.266 [2024-07-11 02:46:34.000096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:40:44.266 [2024-07-11 02:46:34.000114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:40:44.266 [2024-07-11 02:46:34.000131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:40:44.266 [2024-07-11 02:46:34.000149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:44.266 [2024-07-11 02:46:34.000165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:40:44.266 [2024-07-11 02:46:34.000183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:44.266 [2024-07-11 02:46:34.000199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:40:44.266 [2024-07-11 02:46:34.000217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:44.266 [2024-07-11 02:46:34.000233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:40:44.266 [2024-07-11 02:46:34.000251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:44.266 [2024-07-11 02:46:34.000267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:40:44.266 [2024-07-11 02:46:34.000286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:44.266 [2024-07-11 02:46:34.000302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:40:44.266 [2024-07-11 02:46:34.000320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:44.266 [2024-07-11 02:46:34.000336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:40:44.266 [2024-07-11 02:46:34.000354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:44.266 [2024-07-11 02:46:34.000370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:40:44.266 [2024-07-11 02:46:34.000392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:44.266 [2024-07-11 02:46:34.000408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:40:44.266 [2024-07-11 02:46:34.000426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:44.266 [2024-07-11 02:46:34.000442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:40:44.266 [2024-07-11 02:46:34.000459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae2bc0 is same with the state(5) to be set 
00:40:44.266 [2024-07-11 02:46:34.000478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:40:44.266 [2024-07-11 02:46:34.000492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:40:44.266 [2024-07-11 02:46:34.000505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24176 len:8 PRP1 0x0 PRP2 0x0 
00:40:44.266 [2024-07-11 02:46:34.000528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:40:44.266 [2024-07-11 02:46:34.000595] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ae2bc0 was disconnected and freed. reset controller. 
00:40:44.266 [2024-07-11 02:46:34.004688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:40:44.266 [2024-07-11 02:46:34.004760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 
00:40:44.266 [2024-07-11 02:46:34.005500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:44.266 [2024-07-11 02:46:34.005565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 
00:40:44.266 [2024-07-11 02:46:34.005584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 
00:40:44.266 [2024-07-11 02:46:34.005862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 
00:40:44.266 [2024-07-11 02:46:34.006129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:40:44.266 [2024-07-11 02:46:34.006151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:40:44.266 [2024-07-11 02:46:34.006169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:40:44.266 [2024-07-11 02:46:34.010211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.266 [2024-07-11 02:46:34.019563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:40:44.266 [2024-07-11 02:46:34.020082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:44.266 [2024-07-11 02:46:34.020124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 
00:40:44.266 [2024-07-11 02:46:34.020143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 
00:40:44.266 [2024-07-11 02:46:34.020413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 
00:40:44.266 [2024-07-11 02:46:34.020689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:40:44.266 [2024-07-11 02:46:34.020712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:40:44.266 [2024-07-11 02:46:34.020728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:40:44.266 [2024-07-11 02:46:34.024766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.266 [2024-07-11 02:46:34.034036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:40:44.266 [2024-07-11 02:46:34.034599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:44.266 [2024-07-11 02:46:34.034641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 
00:40:44.266 [2024-07-11 02:46:34.034661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 
00:40:44.266 [2024-07-11 02:46:34.034931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 
00:40:44.266 [2024-07-11 02:46:34.035198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:40:44.266 [2024-07-11 02:46:34.035220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:40:44.266 [2024-07-11 02:46:34.035236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:40:44.266 [2024-07-11 02:46:34.039297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.266 [2024-07-11 02:46:34.048612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:40:44.266 [2024-07-11 02:46:34.049037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:44.266 [2024-07-11 02:46:34.049078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 
00:40:44.266 [2024-07-11 02:46:34.049097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 
00:40:44.266 [2024-07-11 02:46:34.049375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 
00:40:44.266 [2024-07-11 02:46:34.049661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:40:44.266 [2024-07-11 02:46:34.049684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:40:44.266 [2024-07-11 02:46:34.049700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:40:44.266 [2024-07-11 02:46:34.053747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.266 [2024-07-11 02:46:34.063113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:40:44.266 [2024-07-11 02:46:34.063623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:44.266 [2024-07-11 02:46:34.063657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 
00:40:44.266 [2024-07-11 02:46:34.063675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 
00:40:44.266 [2024-07-11 02:46:34.063940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 
00:40:44.266 [2024-07-11 02:46:34.064213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:40:44.266 [2024-07-11 02:46:34.064234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:40:44.266 [2024-07-11 02:46:34.064250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:40:44.266 [2024-07-11 02:46:34.068311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.266 [2024-07-11 02:46:34.077592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:40:44.266 [2024-07-11 02:46:34.078117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:44.266 [2024-07-11 02:46:34.078158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 
00:40:44.266 [2024-07-11 02:46:34.078183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 
00:40:44.266 [2024-07-11 02:46:34.078454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 
00:40:44.266 [2024-07-11 02:46:34.078732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:40:44.266 [2024-07-11 02:46:34.078755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:40:44.266 [2024-07-11 02:46:34.078771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:40:44.266 [2024-07-11 02:46:34.082854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.266 [2024-07-11 02:46:34.091994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:40:44.266 [2024-07-11 02:46:34.092465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:44.267 [2024-07-11 02:46:34.092495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 
00:40:44.267 [2024-07-11 02:46:34.092522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 
00:40:44.267 [2024-07-11 02:46:34.092789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 
00:40:44.267 [2024-07-11 02:46:34.093054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:40:44.267 [2024-07-11 02:46:34.093075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:40:44.267 [2024-07-11 02:46:34.093092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:40:44.267 [2024-07-11 02:46:34.097165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.267 [2024-07-11 02:46:34.106447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:40:44.267 [2024-07-11 02:46:34.106846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:44.267 [2024-07-11 02:46:34.106876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 
00:40:44.267 [2024-07-11 02:46:34.106894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 
00:40:44.267 [2024-07-11 02:46:34.107157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 
00:40:44.267 [2024-07-11 02:46:34.107429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:40:44.267 [2024-07-11 02:46:34.107450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:40:44.267 [2024-07-11 02:46:34.107466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:40:44.267 [2024-07-11 02:46:34.111535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.267 [2024-07-11 02:46:34.120918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:40:44.267 [2024-07-11 02:46:34.121428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:44.267 [2024-07-11 02:46:34.121469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 
00:40:44.267 [2024-07-11 02:46:34.121488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 
00:40:44.267 [2024-07-11 02:46:34.121769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 
00:40:44.267 [2024-07-11 02:46:34.122038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:40:44.267 [2024-07-11 02:46:34.122065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:40:44.267 [2024-07-11 02:46:34.122082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:40:44.267 [2024-07-11 02:46:34.126156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.267 [2024-07-11 02:46:34.135326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:40:44.267 [2024-07-11 02:46:34.135840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:44.267 [2024-07-11 02:46:34.135933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 
00:40:44.267 [2024-07-11 02:46:34.135953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 
00:40:44.267 [2024-07-11 02:46:34.136223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 
00:40:44.267 [2024-07-11 02:46:34.136496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:40:44.267 [2024-07-11 02:46:34.136532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:40:44.267 [2024-07-11 02:46:34.136549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:40:44.267 [2024-07-11 02:46:34.140607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.267 [2024-07-11 02:46:34.149794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:40:44.267 [2024-07-11 02:46:34.150296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:44.267 [2024-07-11 02:46:34.150345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 
00:40:44.267 [2024-07-11 02:46:34.150363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 
00:40:44.267 [2024-07-11 02:46:34.150638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 
00:40:44.267 [2024-07-11 02:46:34.150904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:40:44.267 [2024-07-11 02:46:34.150926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:40:44.267 [2024-07-11 02:46:34.150942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:40:44.267 [2024-07-11 02:46:34.155004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.267 [2024-07-11 02:46:34.164285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:40:44.267 [2024-07-11 02:46:34.164841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:44.267 [2024-07-11 02:46:34.164882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 
00:40:44.267 [2024-07-11 02:46:34.164902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 
00:40:44.267 [2024-07-11 02:46:34.165178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 
00:40:44.267 [2024-07-11 02:46:34.165445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:40:44.267 [2024-07-11 02:46:34.165467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:40:44.267 [2024-07-11 02:46:34.165483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:40:44.267 [2024-07-11 02:46:34.169528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.267 [2024-07-11 02:46:34.178919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:40:44.267 [2024-07-11 02:46:34.179427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:44.267 [2024-07-11 02:46:34.179478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 
00:40:44.267 [2024-07-11 02:46:34.179496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 
00:40:44.267 [2024-07-11 02:46:34.179767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 
00:40:44.267 [2024-07-11 02:46:34.180032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:40:44.267 [2024-07-11 02:46:34.180054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:40:44.267 [2024-07-11 02:46:34.180070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:40:44.267 [2024-07-11 02:46:34.184190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.267 [2024-07-11 02:46:34.193321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:40:44.267 [2024-07-11 02:46:34.193855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:44.267 [2024-07-11 02:46:34.193897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 
00:40:44.267 [2024-07-11 02:46:34.193916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 
00:40:44.267 [2024-07-11 02:46:34.194186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 
00:40:44.267 [2024-07-11 02:46:34.194452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:40:44.267 [2024-07-11 02:46:34.194474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:40:44.267 [2024-07-11 02:46:34.194490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:40:44.267 [2024-07-11 02:46:34.198585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.267 [2024-07-11 02:46:34.207720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:40:44.267 [2024-07-11 02:46:34.208212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:44.267 [2024-07-11 02:46:34.208243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 
00:40:44.267 [2024-07-11 02:46:34.208261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 
00:40:44.267 [2024-07-11 02:46:34.208535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 
00:40:44.267 [2024-07-11 02:46:34.208807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:40:44.267 [2024-07-11 02:46:34.208829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:40:44.267 [2024-07-11 02:46:34.208845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:40:44.267 [2024-07-11 02:46:34.212934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.267 [2024-07-11 02:46:34.222311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:40:44.267 [2024-07-11 02:46:34.222842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:44.268 [2024-07-11 02:46:34.222884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 
00:40:44.268 [2024-07-11 02:46:34.222904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 
00:40:44.268 [2024-07-11 02:46:34.223180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 
00:40:44.268 [2024-07-11 02:46:34.223447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:40:44.268 [2024-07-11 02:46:34.223469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:40:44.268 [2024-07-11 02:46:34.223485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:40:44.268 [2024-07-11 02:46:34.227524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.268 [2024-07-11 02:46:34.236799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:40:44.268 [2024-07-11 02:46:34.237322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:44.268 [2024-07-11 02:46:34.237363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 
00:40:44.268 [2024-07-11 02:46:34.237383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 
00:40:44.268 [2024-07-11 02:46:34.237670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 
00:40:44.268 [2024-07-11 02:46:34.237938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:40:44.268 [2024-07-11 02:46:34.237960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:40:44.268 [2024-07-11 02:46:34.237976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:40:44.268 [2024-07-11 02:46:34.242003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.268 [2024-07-11 02:46:34.251274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:40:44.268 [2024-07-11 02:46:34.251782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:44.268 [2024-07-11 02:46:34.251837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 
00:40:44.268 [2024-07-11 02:46:34.251857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 
00:40:44.268 [2024-07-11 02:46:34.252126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 
00:40:44.268 [2024-07-11 02:46:34.252394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:40:44.268 [2024-07-11 02:46:34.252415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:40:44.268 [2024-07-11 02:46:34.252431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:40:44.268 [2024-07-11 02:46:34.256490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.268 [2024-07-11 02:46:34.265767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:40:44.268 [2024-07-11 02:46:34.266255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:44.268 [2024-07-11 02:46:34.266306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 
00:40:44.268 [2024-07-11 02:46:34.266323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 
00:40:44.268 [2024-07-11 02:46:34.266596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 
00:40:44.268 [2024-07-11 02:46:34.266863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:40:44.268 [2024-07-11 02:46:34.266885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:40:44.268 [2024-07-11 02:46:34.266907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:40:44.268 [2024-07-11 02:46:34.270940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.268 [2024-07-11 02:46:34.280246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:40:44.268 [2024-07-11 02:46:34.280762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:44.268 [2024-07-11 02:46:34.280805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 
00:40:44.268 [2024-07-11 02:46:34.280824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 
00:40:44.268 [2024-07-11 02:46:34.281100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 
00:40:44.268 [2024-07-11 02:46:34.281367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:40:44.268 [2024-07-11 02:46:34.281389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:40:44.268 [2024-07-11 02:46:34.281406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:40:44.268 [2024-07-11 02:46:34.285462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.268 [2024-07-11 02:46:34.294755] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:40:44.268 [2024-07-11 02:46:34.295239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:44.268 [2024-07-11 02:46:34.295280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 
00:40:44.268 [2024-07-11 02:46:34.295300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 
00:40:44.268 [2024-07-11 02:46:34.295596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 
00:40:44.268 [2024-07-11 02:46:34.295865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:40:44.268 [2024-07-11 02:46:34.295887] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:40:44.268 [2024-07-11 02:46:34.295903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:40:44.268 [2024-07-11 02:46:34.299947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.268 [2024-07-11 02:46:34.309244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:40:44.268 [2024-07-11 02:46:34.309717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:44.268 [2024-07-11 02:46:34.309768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 
00:40:44.268 [2024-07-11 02:46:34.309786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 
00:40:44.268 [2024-07-11 02:46:34.310049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 
00:40:44.268 [2024-07-11 02:46:34.310316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:40:44.268 [2024-07-11 02:46:34.310338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:40:44.268 [2024-07-11 02:46:34.310354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:40:44.268 [2024-07-11 02:46:34.314414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.268 [2024-07-11 02:46:34.323717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:40:44.268 [2024-07-11 02:46:34.324181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:44.268 [2024-07-11 02:46:34.324237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 
00:40:44.268 [2024-07-11 02:46:34.324255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 
00:40:44.268 [2024-07-11 02:46:34.324527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 
00:40:44.268 [2024-07-11 02:46:34.324794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:40:44.268 [2024-07-11 02:46:34.324815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:40:44.268 [2024-07-11 02:46:34.324831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:40:44.268 [2024-07-11 02:46:34.328848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.268 [2024-07-11 02:46:34.338158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.268 [2024-07-11 02:46:34.338557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.268 [2024-07-11 02:46:34.338590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.268 [2024-07-11 02:46:34.338608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.268 [2024-07-11 02:46:34.338872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.268 [2024-07-11 02:46:34.339137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.268 [2024-07-11 02:46:34.339159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.268 [2024-07-11 02:46:34.339175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.268 [2024-07-11 02:46:34.343273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.268 [2024-07-11 02:46:34.352561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.268 [2024-07-11 02:46:34.353060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.268 [2024-07-11 02:46:34.353090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.268 [2024-07-11 02:46:34.353107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.268 [2024-07-11 02:46:34.353370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.268 [2024-07-11 02:46:34.353643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.268 [2024-07-11 02:46:34.353666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.268 [2024-07-11 02:46:34.353682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.268 [2024-07-11 02:46:34.357720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.268 [2024-07-11 02:46:34.367039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.268 [2024-07-11 02:46:34.367476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.268 [2024-07-11 02:46:34.367526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.268 [2024-07-11 02:46:34.367545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.268 [2024-07-11 02:46:34.367808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.268 [2024-07-11 02:46:34.368080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.268 [2024-07-11 02:46:34.368102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.268 [2024-07-11 02:46:34.368117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.268 [2024-07-11 02:46:34.372149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.268 [2024-07-11 02:46:34.381538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.268 [2024-07-11 02:46:34.382088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.268 [2024-07-11 02:46:34.382130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.269 [2024-07-11 02:46:34.382150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.269 [2024-07-11 02:46:34.382420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.269 [2024-07-11 02:46:34.382699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.269 [2024-07-11 02:46:34.382722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.269 [2024-07-11 02:46:34.382738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.269 [2024-07-11 02:46:34.386805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.269 [2024-07-11 02:46:34.395880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.269 [2024-07-11 02:46:34.396416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.269 [2024-07-11 02:46:34.396457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.269 [2024-07-11 02:46:34.396476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.269 [2024-07-11 02:46:34.396758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.269 [2024-07-11 02:46:34.397037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.269 [2024-07-11 02:46:34.397060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.269 [2024-07-11 02:46:34.397082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.269 [2024-07-11 02:46:34.401150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.269 [2024-07-11 02:46:34.410461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.269 [2024-07-11 02:46:34.410890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.269 [2024-07-11 02:46:34.410932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.269 [2024-07-11 02:46:34.410951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.269 [2024-07-11 02:46:34.411227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.269 [2024-07-11 02:46:34.411506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.269 [2024-07-11 02:46:34.411539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.269 [2024-07-11 02:46:34.411556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.269 [2024-07-11 02:46:34.415593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.269 [2024-07-11 02:46:34.424862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.269 [2024-07-11 02:46:34.425330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.269 [2024-07-11 02:46:34.425382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.269 [2024-07-11 02:46:34.425402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.269 [2024-07-11 02:46:34.425682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.269 [2024-07-11 02:46:34.425949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.269 [2024-07-11 02:46:34.425973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.269 [2024-07-11 02:46:34.425989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.269 [2024-07-11 02:46:34.430011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.269 [2024-07-11 02:46:34.439290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.269 [2024-07-11 02:46:34.439800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.269 [2024-07-11 02:46:34.439842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.269 [2024-07-11 02:46:34.439861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.269 [2024-07-11 02:46:34.440131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.269 [2024-07-11 02:46:34.440399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.269 [2024-07-11 02:46:34.440420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.269 [2024-07-11 02:46:34.440436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.269 [2024-07-11 02:46:34.444468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.269 [2024-07-11 02:46:34.453754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.269 [2024-07-11 02:46:34.454292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.269 [2024-07-11 02:46:34.454334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.269 [2024-07-11 02:46:34.454353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.269 [2024-07-11 02:46:34.454635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.269 [2024-07-11 02:46:34.454903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.269 [2024-07-11 02:46:34.454925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.269 [2024-07-11 02:46:34.454941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.269 [2024-07-11 02:46:34.458971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.269 [2024-07-11 02:46:34.468276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.269 [2024-07-11 02:46:34.468653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.269 [2024-07-11 02:46:34.468684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.269 [2024-07-11 02:46:34.468708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.269 [2024-07-11 02:46:34.468972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.269 [2024-07-11 02:46:34.469238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.269 [2024-07-11 02:46:34.469260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.269 [2024-07-11 02:46:34.469276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.269 [2024-07-11 02:46:34.473307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.269 [2024-07-11 02:46:34.482654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.269 [2024-07-11 02:46:34.483071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.269 [2024-07-11 02:46:34.483112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.269 [2024-07-11 02:46:34.483132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.269 [2024-07-11 02:46:34.483401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.269 [2024-07-11 02:46:34.483686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.269 [2024-07-11 02:46:34.483709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.269 [2024-07-11 02:46:34.483725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.269 [2024-07-11 02:46:34.487769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.269 [2024-07-11 02:46:34.497107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.269 [2024-07-11 02:46:34.497603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.269 [2024-07-11 02:46:34.497644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.269 [2024-07-11 02:46:34.497664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.269 [2024-07-11 02:46:34.497934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.269 [2024-07-11 02:46:34.498209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.269 [2024-07-11 02:46:34.498230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.269 [2024-07-11 02:46:34.498246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.269 [2024-07-11 02:46:34.502294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.269 [2024-07-11 02:46:34.511583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.269 [2024-07-11 02:46:34.512051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.269 [2024-07-11 02:46:34.512082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.269 [2024-07-11 02:46:34.512099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.269 [2024-07-11 02:46:34.512363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.269 [2024-07-11 02:46:34.512643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.269 [2024-07-11 02:46:34.512667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.269 [2024-07-11 02:46:34.512683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.269 [2024-07-11 02:46:34.516714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.269 [2024-07-11 02:46:34.525999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.269 [2024-07-11 02:46:34.526448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.269 [2024-07-11 02:46:34.526478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.269 [2024-07-11 02:46:34.526495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.269 [2024-07-11 02:46:34.526770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.269 [2024-07-11 02:46:34.527042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.269 [2024-07-11 02:46:34.527064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.269 [2024-07-11 02:46:34.527080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.269 [2024-07-11 02:46:34.531142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.269 [2024-07-11 02:46:34.540450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.269 [2024-07-11 02:46:34.540951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.270 [2024-07-11 02:46:34.541004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.270 [2024-07-11 02:46:34.541024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.270 [2024-07-11 02:46:34.541294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.270 [2024-07-11 02:46:34.541573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.270 [2024-07-11 02:46:34.541596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.270 [2024-07-11 02:46:34.541612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.270 [2024-07-11 02:46:34.545660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.270 [2024-07-11 02:46:34.554957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.270 [2024-07-11 02:46:34.555460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.270 [2024-07-11 02:46:34.555500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.270 [2024-07-11 02:46:34.555531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.270 [2024-07-11 02:46:34.555809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.270 [2024-07-11 02:46:34.556075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.270 [2024-07-11 02:46:34.556097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.270 [2024-07-11 02:46:34.556113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.270 [2024-07-11 02:46:34.560154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.270 [2024-07-11 02:46:34.569481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.270 [2024-07-11 02:46:34.569983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.270 [2024-07-11 02:46:34.570025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.270 [2024-07-11 02:46:34.570044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.270 [2024-07-11 02:46:34.570314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.270 [2024-07-11 02:46:34.570594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.270 [2024-07-11 02:46:34.570617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.270 [2024-07-11 02:46:34.570633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.270 [2024-07-11 02:46:34.574666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.270 [2024-07-11 02:46:34.583957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.270 [2024-07-11 02:46:34.584485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.270 [2024-07-11 02:46:34.584535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.270 [2024-07-11 02:46:34.584556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.270 [2024-07-11 02:46:34.584826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.270 [2024-07-11 02:46:34.585099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.270 [2024-07-11 02:46:34.585121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.270 [2024-07-11 02:46:34.585137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.270 [2024-07-11 02:46:34.589168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.270 [2024-07-11 02:46:34.598484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.270 [2024-07-11 02:46:34.598992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.270 [2024-07-11 02:46:34.599042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.270 [2024-07-11 02:46:34.599060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.270 [2024-07-11 02:46:34.599323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.270 [2024-07-11 02:46:34.599599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.270 [2024-07-11 02:46:34.599621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.270 [2024-07-11 02:46:34.599637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.270 [2024-07-11 02:46:34.603663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.270 [2024-07-11 02:46:34.612951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.270 [2024-07-11 02:46:34.613403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.270 [2024-07-11 02:46:34.613435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.270 [2024-07-11 02:46:34.613459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.270 [2024-07-11 02:46:34.613733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.270 [2024-07-11 02:46:34.613999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.270 [2024-07-11 02:46:34.614021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.270 [2024-07-11 02:46:34.614036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.270 [2024-07-11 02:46:34.618051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.270 [2024-07-11 02:46:34.627339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.270 [2024-07-11 02:46:34.627734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.270 [2024-07-11 02:46:34.627764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.270 [2024-07-11 02:46:34.627781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.270 [2024-07-11 02:46:34.628045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.270 [2024-07-11 02:46:34.628310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.270 [2024-07-11 02:46:34.628331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.270 [2024-07-11 02:46:34.628347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.270 [2024-07-11 02:46:34.632399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.270 [2024-07-11 02:46:34.641721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.270 [2024-07-11 02:46:34.642166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.270 [2024-07-11 02:46:34.642221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.270 [2024-07-11 02:46:34.642239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.270 [2024-07-11 02:46:34.642502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.270 [2024-07-11 02:46:34.642778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.270 [2024-07-11 02:46:34.642799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.270 [2024-07-11 02:46:34.642815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.270 [2024-07-11 02:46:34.646842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.270 [2024-07-11 02:46:34.656162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:44.270 [2024-07-11 02:46:34.656666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:44.270 [2024-07-11 02:46:34.656696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:44.270 [2024-07-11 02:46:34.656714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:44.270 [2024-07-11 02:46:34.656983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:44.270 [2024-07-11 02:46:34.657248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:44.270 [2024-07-11 02:46:34.657277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:44.270 [2024-07-11 02:46:34.657294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:44.270 [2024-07-11 02:46:34.661321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:44.270 [2024-07-11 02:46:34.670629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:44.270 [2024-07-11 02:46:34.671137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:44.270 [2024-07-11 02:46:34.671193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:44.270 [2024-07-11 02:46:34.671212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:44.270 [2024-07-11 02:46:34.671482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:44.270 [2024-07-11 02:46:34.671760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:44.270 [2024-07-11 02:46:34.671783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:44.270 [2024-07-11 02:46:34.671799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:44.270 [2024-07-11 02:46:34.675825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:44.530 [2024-07-11 02:46:34.685077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:44.530 [2024-07-11 02:46:34.685522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:44.530 [2024-07-11 02:46:34.685554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:44.530 [2024-07-11 02:46:34.685575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:44.530 [2024-07-11 02:46:34.685839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:44.530 [2024-07-11 02:46:34.686105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:44.530 [2024-07-11 02:46:34.686127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:44.530 [2024-07-11 02:46:34.686143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:44.530 [2024-07-11 02:46:34.690244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:44.530 [2024-07-11 02:46:34.699480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:44.530 [2024-07-11 02:46:34.699962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:44.530 [2024-07-11 02:46:34.700012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:44.530 [2024-07-11 02:46:34.700030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:44.530 [2024-07-11 02:46:34.700293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:44.530 [2024-07-11 02:46:34.700568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:44.530 [2024-07-11 02:46:34.700591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:44.530 [2024-07-11 02:46:34.700607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:44.530 [2024-07-11 02:46:34.704633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:44.530 [2024-07-11 02:46:34.713863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:44.531 [2024-07-11 02:46:34.714229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:44.531 [2024-07-11 02:46:34.714259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:44.531 [2024-07-11 02:46:34.714277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:44.531 [2024-07-11 02:46:34.714552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:44.531 [2024-07-11 02:46:34.714820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:44.531 [2024-07-11 02:46:34.714841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:44.531 [2024-07-11 02:46:34.714858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:44.531 [2024-07-11 02:46:34.718879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:44.531 [2024-07-11 02:46:34.728418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:44.531 [2024-07-11 02:46:34.728941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:44.531 [2024-07-11 02:46:34.728984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:44.531 [2024-07-11 02:46:34.729003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:44.531 [2024-07-11 02:46:34.729274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:44.531 [2024-07-11 02:46:34.729553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:44.531 [2024-07-11 02:46:34.729575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:44.531 [2024-07-11 02:46:34.729591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:44.531 [2024-07-11 02:46:34.733637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:44.531 [2024-07-11 02:46:34.742896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:44.531 [2024-07-11 02:46:34.743318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:44.531 [2024-07-11 02:46:34.743359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:44.531 [2024-07-11 02:46:34.743378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:44.531 [2024-07-11 02:46:34.743661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:44.531 [2024-07-11 02:46:34.743929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:44.531 [2024-07-11 02:46:34.743951] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:44.531 [2024-07-11 02:46:34.743966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:44.531 [2024-07-11 02:46:34.748006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:44.531 [2024-07-11 02:46:34.757321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:44.531 [2024-07-11 02:46:34.757811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:44.531 [2024-07-11 02:46:34.757853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:44.531 [2024-07-11 02:46:34.757872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:44.531 [2024-07-11 02:46:34.758148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:44.531 [2024-07-11 02:46:34.758416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:44.531 [2024-07-11 02:46:34.758438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:44.531 [2024-07-11 02:46:34.758454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:44.531 [2024-07-11 02:46:34.762476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:44.531 [2024-07-11 02:46:34.771735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:44.531 [2024-07-11 02:46:34.772207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:44.531 [2024-07-11 02:46:34.772238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:44.531 [2024-07-11 02:46:34.772256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:44.531 [2024-07-11 02:46:34.772529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:44.531 [2024-07-11 02:46:34.772795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:44.531 [2024-07-11 02:46:34.772817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:44.531 [2024-07-11 02:46:34.772833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:44.531 [2024-07-11 02:46:34.776866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:44.531 [2024-07-11 02:46:34.786132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:44.531 [2024-07-11 02:46:34.786502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:44.531 [2024-07-11 02:46:34.786539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:44.531 [2024-07-11 02:46:34.786557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:44.531 [2024-07-11 02:46:34.786826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:44.531 [2024-07-11 02:46:34.787092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:44.531 [2024-07-11 02:46:34.787114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:44.531 [2024-07-11 02:46:34.787130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:44.531 [2024-07-11 02:46:34.791156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:44.531 [2024-07-11 02:46:34.800665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:44.531 [2024-07-11 02:46:34.801165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:44.531 [2024-07-11 02:46:34.801206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:44.531 [2024-07-11 02:46:34.801226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:44.531 [2024-07-11 02:46:34.801496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:44.531 [2024-07-11 02:46:34.801773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:44.531 [2024-07-11 02:46:34.801796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:44.531 [2024-07-11 02:46:34.801819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:44.531 [2024-07-11 02:46:34.805864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:44.531 [2024-07-11 02:46:34.815146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:44.531 [2024-07-11 02:46:34.815623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:44.531 [2024-07-11 02:46:34.815665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:44.531 [2024-07-11 02:46:34.815684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:44.531 [2024-07-11 02:46:34.815954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:44.531 [2024-07-11 02:46:34.816227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:44.531 [2024-07-11 02:46:34.816249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:44.531 [2024-07-11 02:46:34.816265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:44.531 [2024-07-11 02:46:34.820323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:44.531 [2024-07-11 02:46:34.829646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:44.531 [2024-07-11 02:46:34.830171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:44.531 [2024-07-11 02:46:34.830213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:44.531 [2024-07-11 02:46:34.830232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:44.531 [2024-07-11 02:46:34.830508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:44.531 [2024-07-11 02:46:34.830798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:44.531 [2024-07-11 02:46:34.830821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:44.531 [2024-07-11 02:46:34.830837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:44.531 [2024-07-11 02:46:34.834897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:44.531 [2024-07-11 02:46:34.844189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:44.531 [2024-07-11 02:46:34.844695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:44.531 [2024-07-11 02:46:34.844737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:44.531 [2024-07-11 02:46:34.844756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:44.531 [2024-07-11 02:46:34.845026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:44.531 [2024-07-11 02:46:34.845292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:44.531 [2024-07-11 02:46:34.845314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:44.531 [2024-07-11 02:46:34.845331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:44.531 [2024-07-11 02:46:34.849379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:44.531 [2024-07-11 02:46:34.858706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:44.531 [2024-07-11 02:46:34.859229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:44.531 [2024-07-11 02:46:34.859277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:44.532 [2024-07-11 02:46:34.859297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:44.532 [2024-07-11 02:46:34.859586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:44.532 [2024-07-11 02:46:34.859854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:44.532 [2024-07-11 02:46:34.859876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:44.532 [2024-07-11 02:46:34.859892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:44.532 [2024-07-11 02:46:34.863933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:44.532 [2024-07-11 02:46:34.873025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:44.532 [2024-07-11 02:46:34.873607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:44.532 [2024-07-11 02:46:34.873649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:44.532 [2024-07-11 02:46:34.873668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:44.532 [2024-07-11 02:46:34.873938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:44.532 [2024-07-11 02:46:34.874205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:44.532 [2024-07-11 02:46:34.874227] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:44.532 [2024-07-11 02:46:34.874243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:44.532 [2024-07-11 02:46:34.878277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:44.532 [2024-07-11 02:46:34.887578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:44.532 [2024-07-11 02:46:34.888056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:44.532 [2024-07-11 02:46:34.888113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:44.532 [2024-07-11 02:46:34.888132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:44.532 [2024-07-11 02:46:34.888402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:44.532 [2024-07-11 02:46:34.888687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:44.532 [2024-07-11 02:46:34.888710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:44.532 [2024-07-11 02:46:34.888727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:44.532 [2024-07-11 02:46:34.892777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:44.532 [2024-07-11 02:46:34.901921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:44.532 [2024-07-11 02:46:34.902406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:44.532 [2024-07-11 02:46:34.902458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:44.532 [2024-07-11 02:46:34.902476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:44.532 [2024-07-11 02:46:34.902749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:44.532 [2024-07-11 02:46:34.903021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:44.532 [2024-07-11 02:46:34.903044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:44.532 [2024-07-11 02:46:34.903060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:44.532 [2024-07-11 02:46:34.907134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:44.532 [2024-07-11 02:46:34.916446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:44.532 [2024-07-11 02:46:34.916935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:44.532 [2024-07-11 02:46:34.916976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:44.532 [2024-07-11 02:46:34.916995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:44.532 [2024-07-11 02:46:34.917271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:44.532 [2024-07-11 02:46:34.917557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:44.532 [2024-07-11 02:46:34.917580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:44.532 [2024-07-11 02:46:34.917597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:44.532 [2024-07-11 02:46:34.921659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:44.532 [2024-07-11 02:46:34.930995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:44.532 [2024-07-11 02:46:34.931485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:44.532 [2024-07-11 02:46:34.931523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:44.532 [2024-07-11 02:46:34.931542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:44.532 [2024-07-11 02:46:34.931806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:44.532 [2024-07-11 02:46:34.932072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:44.532 [2024-07-11 02:46:34.932094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:44.532 [2024-07-11 02:46:34.932109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:44.532 [2024-07-11 02:46:34.936144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:44.532 [2024-07-11 02:46:34.945473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:44.532 [2024-07-11 02:46:34.945956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:44.532 [2024-07-11 02:46:34.946006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:44.532 [2024-07-11 02:46:34.946024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:44.532 [2024-07-11 02:46:34.946287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:44.532 [2024-07-11 02:46:34.946563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:44.532 [2024-07-11 02:46:34.946585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:44.532 [2024-07-11 02:46:34.946601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:44.791 [2024-07-11 02:46:34.950636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:44.791 [2024-07-11 02:46:34.959962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:44.791 [2024-07-11 02:46:34.960482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:44.791 [2024-07-11 02:46:34.960533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:44.791 [2024-07-11 02:46:34.960554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:44.791 [2024-07-11 02:46:34.960824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:44.791 [2024-07-11 02:46:34.961091] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:44.791 [2024-07-11 02:46:34.961113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:44.791 [2024-07-11 02:46:34.961129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:44.791 [2024-07-11 02:46:34.965170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:44.791 [2024-07-11 02:46:34.974500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:44.791 [2024-07-11 02:46:34.974983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:44.791 [2024-07-11 02:46:34.975023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:44.791 [2024-07-11 02:46:34.975042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:44.791 [2024-07-11 02:46:34.975313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:44.791 [2024-07-11 02:46:34.975592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:44.791 [2024-07-11 02:46:34.975615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:44.791 [2024-07-11 02:46:34.975631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:44.791 [2024-07-11 02:46:34.979733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:44.791 [2024-07-11 02:46:34.989076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:44.791 [2024-07-11 02:46:34.989598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:44.791 [2024-07-11 02:46:34.989640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:44.791 [2024-07-11 02:46:34.989659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:44.791 [2024-07-11 02:46:34.989929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:44.791 [2024-07-11 02:46:34.990196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:44.791 [2024-07-11 02:46:34.990218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:44.791 [2024-07-11 02:46:34.990233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:44.791 [2024-07-11 02:46:34.994281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:44.791 [2024-07-11 02:46:35.003575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.791 [2024-07-11 02:46:35.004070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.791 [2024-07-11 02:46:35.004110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.791 [2024-07-11 02:46:35.004135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.791 [2024-07-11 02:46:35.004406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.791 [2024-07-11 02:46:35.004706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.791 [2024-07-11 02:46:35.004730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.791 [2024-07-11 02:46:35.004746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.791 [2024-07-11 02:46:35.008787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.791 [2024-07-11 02:46:35.018052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.791 [2024-07-11 02:46:35.018557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.791 [2024-07-11 02:46:35.018629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.791 [2024-07-11 02:46:35.018649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.791 [2024-07-11 02:46:35.018919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.791 [2024-07-11 02:46:35.019189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.791 [2024-07-11 02:46:35.019212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.791 [2024-07-11 02:46:35.019228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.791 [2024-07-11 02:46:35.023401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.791 [2024-07-11 02:46:35.032476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.791 [2024-07-11 02:46:35.033014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.791 [2024-07-11 02:46:35.033056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.792 [2024-07-11 02:46:35.033076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.792 [2024-07-11 02:46:35.033345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.792 [2024-07-11 02:46:35.033628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.792 [2024-07-11 02:46:35.033652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.792 [2024-07-11 02:46:35.033669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.792 [2024-07-11 02:46:35.037713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.792 [2024-07-11 02:46:35.047039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.792 [2024-07-11 02:46:35.047548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.792 [2024-07-11 02:46:35.047591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.792 [2024-07-11 02:46:35.047611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.792 [2024-07-11 02:46:35.047882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.792 [2024-07-11 02:46:35.048149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.792 [2024-07-11 02:46:35.048178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.792 [2024-07-11 02:46:35.048195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.792 [2024-07-11 02:46:35.052270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.792 [2024-07-11 02:46:35.061567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.792 [2024-07-11 02:46:35.062103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.792 [2024-07-11 02:46:35.062146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.792 [2024-07-11 02:46:35.062165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.792 [2024-07-11 02:46:35.062436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.792 [2024-07-11 02:46:35.062717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.792 [2024-07-11 02:46:35.062741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.792 [2024-07-11 02:46:35.062758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.792 [2024-07-11 02:46:35.066788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.792 [2024-07-11 02:46:35.075896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.792 [2024-07-11 02:46:35.076372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.792 [2024-07-11 02:46:35.076425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.792 [2024-07-11 02:46:35.076443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.792 [2024-07-11 02:46:35.076729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.792 [2024-07-11 02:46:35.076998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.792 [2024-07-11 02:46:35.077020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.792 [2024-07-11 02:46:35.077036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.792 [2024-07-11 02:46:35.081089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.792 [2024-07-11 02:46:35.090371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.792 [2024-07-11 02:46:35.090835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.792 [2024-07-11 02:46:35.090865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.792 [2024-07-11 02:46:35.090883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.792 [2024-07-11 02:46:35.091146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.792 [2024-07-11 02:46:35.091412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.792 [2024-07-11 02:46:35.091436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.792 [2024-07-11 02:46:35.091451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.792 [2024-07-11 02:46:35.095478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.792 [2024-07-11 02:46:35.104873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.792 [2024-07-11 02:46:35.105310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.792 [2024-07-11 02:46:35.105360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.792 [2024-07-11 02:46:35.105378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.792 [2024-07-11 02:46:35.105651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.792 [2024-07-11 02:46:35.105919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.792 [2024-07-11 02:46:35.105941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.792 [2024-07-11 02:46:35.105957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.792 [2024-07-11 02:46:35.109985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.792 [2024-07-11 02:46:35.119304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.792 [2024-07-11 02:46:35.119809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.792 [2024-07-11 02:46:35.119858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.792 [2024-07-11 02:46:35.119876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.792 [2024-07-11 02:46:35.120146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.792 [2024-07-11 02:46:35.120412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.792 [2024-07-11 02:46:35.120434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.792 [2024-07-11 02:46:35.120450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.792 [2024-07-11 02:46:35.124491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.792 [2024-07-11 02:46:35.133788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.792 [2024-07-11 02:46:35.134257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.792 [2024-07-11 02:46:35.134287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.792 [2024-07-11 02:46:35.134304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.792 [2024-07-11 02:46:35.134577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.792 [2024-07-11 02:46:35.134844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.792 [2024-07-11 02:46:35.134867] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.792 [2024-07-11 02:46:35.134883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.792 [2024-07-11 02:46:35.138912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.792 [2024-07-11 02:46:35.148200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.792 [2024-07-11 02:46:35.148685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.792 [2024-07-11 02:46:35.148728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.792 [2024-07-11 02:46:35.148753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.792 [2024-07-11 02:46:35.149025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.792 [2024-07-11 02:46:35.149293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.792 [2024-07-11 02:46:35.149316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.792 [2024-07-11 02:46:35.149332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.792 [2024-07-11 02:46:35.153505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.792 [2024-07-11 02:46:35.162583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.792 [2024-07-11 02:46:35.163116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.792 [2024-07-11 02:46:35.163158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.792 [2024-07-11 02:46:35.163178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.792 [2024-07-11 02:46:35.163448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.792 [2024-07-11 02:46:35.163729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.792 [2024-07-11 02:46:35.163752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.792 [2024-07-11 02:46:35.163769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.792 [2024-07-11 02:46:35.167803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.792 [2024-07-11 02:46:35.177072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.792 [2024-07-11 02:46:35.177539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.792 [2024-07-11 02:46:35.177577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.792 [2024-07-11 02:46:35.177596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.792 [2024-07-11 02:46:35.177859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.792 [2024-07-11 02:46:35.178126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.792 [2024-07-11 02:46:35.178148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.792 [2024-07-11 02:46:35.178167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.792 [2024-07-11 02:46:35.182186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.792 [2024-07-11 02:46:35.191480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.792 [2024-07-11 02:46:35.191950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.793 [2024-07-11 02:46:35.192001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.793 [2024-07-11 02:46:35.192019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.793 [2024-07-11 02:46:35.192288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.793 [2024-07-11 02:46:35.192564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.793 [2024-07-11 02:46:35.192593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.793 [2024-07-11 02:46:35.192610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:44.793 [2024-07-11 02:46:35.196668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:44.793 [2024-07-11 02:46:35.205961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:44.793 [2024-07-11 02:46:35.206547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:44.793 [2024-07-11 02:46:35.206591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:44.793 [2024-07-11 02:46:35.206610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:44.793 [2024-07-11 02:46:35.206893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:44.793 [2024-07-11 02:46:35.207162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:44.793 [2024-07-11 02:46:35.207185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:44.793 [2024-07-11 02:46:35.207203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.051 [2024-07-11 02:46:35.211236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.051 [2024-07-11 02:46:35.220305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.051 [2024-07-11 02:46:35.220768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.051 [2024-07-11 02:46:35.220810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.051 [2024-07-11 02:46:35.220830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.051 [2024-07-11 02:46:35.221106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.051 [2024-07-11 02:46:35.221374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.051 [2024-07-11 02:46:35.221396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.051 [2024-07-11 02:46:35.221414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.051 [2024-07-11 02:46:35.225476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.051 [2024-07-11 02:46:35.234808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.051 [2024-07-11 02:46:35.235204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.051 [2024-07-11 02:46:35.235236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.051 [2024-07-11 02:46:35.235254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.051 [2024-07-11 02:46:35.235528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.051 [2024-07-11 02:46:35.235795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.051 [2024-07-11 02:46:35.235817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.051 [2024-07-11 02:46:35.235834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.051 [2024-07-11 02:46:35.239902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.051 [2024-07-11 02:46:35.249198] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.051 [2024-07-11 02:46:35.249728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.051 [2024-07-11 02:46:35.249785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.051 [2024-07-11 02:46:35.249805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.051 [2024-07-11 02:46:35.250076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.051 [2024-07-11 02:46:35.250345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.051 [2024-07-11 02:46:35.250367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.051 [2024-07-11 02:46:35.250383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.051 [2024-07-11 02:46:35.254435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.051 [2024-07-11 02:46:35.263739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.051 [2024-07-11 02:46:35.264225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.051 [2024-07-11 02:46:35.264267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.052 [2024-07-11 02:46:35.264287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.052 [2024-07-11 02:46:35.264571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.052 [2024-07-11 02:46:35.264840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.052 [2024-07-11 02:46:35.264863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.052 [2024-07-11 02:46:35.264878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.052 [2024-07-11 02:46:35.268907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.052 [2024-07-11 02:46:35.278284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.052 [2024-07-11 02:46:35.278780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.052 [2024-07-11 02:46:35.278836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.052 [2024-07-11 02:46:35.278856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.052 [2024-07-11 02:46:35.279126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.052 [2024-07-11 02:46:35.279392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.052 [2024-07-11 02:46:35.279416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.052 [2024-07-11 02:46:35.279433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.052 [2024-07-11 02:46:35.283493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.052 [2024-07-11 02:46:35.292858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.052 [2024-07-11 02:46:35.293305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.052 [2024-07-11 02:46:35.293336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.052 [2024-07-11 02:46:35.293354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.052 [2024-07-11 02:46:35.293634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.052 [2024-07-11 02:46:35.293901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.052 [2024-07-11 02:46:35.293925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.052 [2024-07-11 02:46:35.293940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.052 [2024-07-11 02:46:35.298006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.052 [2024-07-11 02:46:35.307402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.052 [2024-07-11 02:46:35.307943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.052 [2024-07-11 02:46:35.307986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.052 [2024-07-11 02:46:35.308006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.052 [2024-07-11 02:46:35.308283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.052 [2024-07-11 02:46:35.308571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.052 [2024-07-11 02:46:35.308595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.052 [2024-07-11 02:46:35.308612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.052 [2024-07-11 02:46:35.312702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.052 [2024-07-11 02:46:35.321876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.052 [2024-07-11 02:46:35.322389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.052 [2024-07-11 02:46:35.322432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.052 [2024-07-11 02:46:35.322451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.052 [2024-07-11 02:46:35.322736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.052 [2024-07-11 02:46:35.323010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.052 [2024-07-11 02:46:35.323033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.052 [2024-07-11 02:46:35.323050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.052 [2024-07-11 02:46:35.327092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.052 [2024-07-11 02:46:35.336411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.052 [2024-07-11 02:46:35.336829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.052 [2024-07-11 02:46:35.336863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.052 [2024-07-11 02:46:35.336881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.052 [2024-07-11 02:46:35.337145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.052 [2024-07-11 02:46:35.337412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.052 [2024-07-11 02:46:35.337436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.052 [2024-07-11 02:46:35.337458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.052 [2024-07-11 02:46:35.341491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.052 [2024-07-11 02:46:35.350789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.052 [2024-07-11 02:46:35.351178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.052 [2024-07-11 02:46:35.351209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.052 [2024-07-11 02:46:35.351227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.052 [2024-07-11 02:46:35.351490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.052 [2024-07-11 02:46:35.351766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.052 [2024-07-11 02:46:35.351790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.052 [2024-07-11 02:46:35.351806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.052 [2024-07-11 02:46:35.355850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.052 [2024-07-11 02:46:35.365122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.052 [2024-07-11 02:46:35.365525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.052 [2024-07-11 02:46:35.365556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.052 [2024-07-11 02:46:35.365576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.052 [2024-07-11 02:46:35.365839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.052 [2024-07-11 02:46:35.366104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.052 [2024-07-11 02:46:35.366127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.052 [2024-07-11 02:46:35.366143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.052 [2024-07-11 02:46:35.370187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.052 [2024-07-11 02:46:35.379470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.052 [2024-07-11 02:46:35.380004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.052 [2024-07-11 02:46:35.380050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.052 [2024-07-11 02:46:35.380068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.052 [2024-07-11 02:46:35.380338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.052 [2024-07-11 02:46:35.380624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.052 [2024-07-11 02:46:35.380647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.052 [2024-07-11 02:46:35.380663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.052 [2024-07-11 02:46:35.384690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.052 [2024-07-11 02:46:35.394020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.052 [2024-07-11 02:46:35.394504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.052 [2024-07-11 02:46:35.394578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.052 [2024-07-11 02:46:35.394596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.052 [2024-07-11 02:46:35.394860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.052 [2024-07-11 02:46:35.395126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.052 [2024-07-11 02:46:35.395149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.052 [2024-07-11 02:46:35.395165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.052 [2024-07-11 02:46:35.399211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.052 [2024-07-11 02:46:35.408487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.052 [2024-07-11 02:46:35.408978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.052 [2024-07-11 02:46:35.409028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.052 [2024-07-11 02:46:35.409046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.052 [2024-07-11 02:46:35.409318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.052 [2024-07-11 02:46:35.409594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.052 [2024-07-11 02:46:35.409618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.052 [2024-07-11 02:46:35.409634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.052 [2024-07-11 02:46:35.413677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.052 [2024-07-11 02:46:35.422947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.052 [2024-07-11 02:46:35.423426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.052 [2024-07-11 02:46:35.423475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.053 [2024-07-11 02:46:35.423492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.053 [2024-07-11 02:46:35.423772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.053 [2024-07-11 02:46:35.424039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.053 [2024-07-11 02:46:35.424067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.053 [2024-07-11 02:46:35.424083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.053 [2024-07-11 02:46:35.428125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.053 [2024-07-11 02:46:35.437412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.053 [2024-07-11 02:46:35.437896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.053 [2024-07-11 02:46:35.437926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.053 [2024-07-11 02:46:35.437944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.053 [2024-07-11 02:46:35.438213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.053 [2024-07-11 02:46:35.438485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.053 [2024-07-11 02:46:35.438518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.053 [2024-07-11 02:46:35.438535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.053 [2024-07-11 02:46:35.442582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.053 [2024-07-11 02:46:35.451931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.053 [2024-07-11 02:46:35.452414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.053 [2024-07-11 02:46:35.452444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.053 [2024-07-11 02:46:35.452462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.053 [2024-07-11 02:46:35.452734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.053 [2024-07-11 02:46:35.453001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.053 [2024-07-11 02:46:35.453024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.053 [2024-07-11 02:46:35.453039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.053 [2024-07-11 02:46:35.457106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.053 [2024-07-11 02:46:35.466544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.053 [2024-07-11 02:46:35.467013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.053 [2024-07-11 02:46:35.467081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.053 [2024-07-11 02:46:35.467099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.053 [2024-07-11 02:46:35.467362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.053 [2024-07-11 02:46:35.467642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.053 [2024-07-11 02:46:35.467666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.053 [2024-07-11 02:46:35.467682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.312 [2024-07-11 02:46:35.471726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.312 [2024-07-11 02:46:35.481041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.312 [2024-07-11 02:46:35.481524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.312 [2024-07-11 02:46:35.481555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.312 [2024-07-11 02:46:35.481572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.312 [2024-07-11 02:46:35.481835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.312 [2024-07-11 02:46:35.482100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.312 [2024-07-11 02:46:35.482123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.312 [2024-07-11 02:46:35.482139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.312 [2024-07-11 02:46:35.486204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.312 [2024-07-11 02:46:35.495607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.312 [2024-07-11 02:46:35.496129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.312 [2024-07-11 02:46:35.496159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.312 [2024-07-11 02:46:35.496176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.312 [2024-07-11 02:46:35.496439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.312 [2024-07-11 02:46:35.496714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.312 [2024-07-11 02:46:35.496738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.312 [2024-07-11 02:46:35.496754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.312 [2024-07-11 02:46:35.500826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.312 [2024-07-11 02:46:35.510145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.312 [2024-07-11 02:46:35.510604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.312 [2024-07-11 02:46:35.510656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.312 [2024-07-11 02:46:35.510674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.312 [2024-07-11 02:46:35.510943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.312 [2024-07-11 02:46:35.511209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.312 [2024-07-11 02:46:35.511231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.312 [2024-07-11 02:46:35.511247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.312 [2024-07-11 02:46:35.515291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.312 [2024-07-11 02:46:35.524692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.312 [2024-07-11 02:46:35.525186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.312 [2024-07-11 02:46:35.525242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.312 [2024-07-11 02:46:35.525261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.312 [2024-07-11 02:46:35.525544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.312 [2024-07-11 02:46:35.525812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.312 [2024-07-11 02:46:35.525835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.312 [2024-07-11 02:46:35.525851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.312 [2024-07-11 02:46:35.529916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.312 [2024-07-11 02:46:35.539252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.312 [2024-07-11 02:46:35.539767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.312 [2024-07-11 02:46:35.539810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.312 [2024-07-11 02:46:35.539835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.312 [2024-07-11 02:46:35.540106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.312 [2024-07-11 02:46:35.540373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.312 [2024-07-11 02:46:35.540396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.312 [2024-07-11 02:46:35.540412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.312 [2024-07-11 02:46:35.544489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.312 [2024-07-11 02:46:35.553642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.312 [2024-07-11 02:46:35.554164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.312 [2024-07-11 02:46:35.554215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.312 [2024-07-11 02:46:35.554240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.312 [2024-07-11 02:46:35.554529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.312 [2024-07-11 02:46:35.554796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.312 [2024-07-11 02:46:35.554820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.312 [2024-07-11 02:46:35.554836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.312 [2024-07-11 02:46:35.558918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.312 [2024-07-11 02:46:35.568027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.312 [2024-07-11 02:46:35.568533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.312 [2024-07-11 02:46:35.568592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.312 [2024-07-11 02:46:35.568612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.312 [2024-07-11 02:46:35.568890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.312 [2024-07-11 02:46:35.569163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.312 [2024-07-11 02:46:35.569186] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.312 [2024-07-11 02:46:35.569202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.312 [2024-07-11 02:46:35.573274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.312 [2024-07-11 02:46:35.582389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.312 [2024-07-11 02:46:35.582876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.312 [2024-07-11 02:46:35.582918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.312 [2024-07-11 02:46:35.582938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.312 [2024-07-11 02:46:35.583209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.312 [2024-07-11 02:46:35.583476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.312 [2024-07-11 02:46:35.583507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.312 [2024-07-11 02:46:35.583538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.312 [2024-07-11 02:46:35.587611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.312 [2024-07-11 02:46:35.596750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.312 [2024-07-11 02:46:35.597199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.312 [2024-07-11 02:46:35.597230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.312 [2024-07-11 02:46:35.597249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.312 [2024-07-11 02:46:35.597523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.312 [2024-07-11 02:46:35.597790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.312 [2024-07-11 02:46:35.597813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.312 [2024-07-11 02:46:35.597829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.312 [2024-07-11 02:46:35.601873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.312 [2024-07-11 02:46:35.611200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.312 [2024-07-11 02:46:35.611672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.312 [2024-07-11 02:46:35.611703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.312 [2024-07-11 02:46:35.611721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.312 [2024-07-11 02:46:35.611985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.312 [2024-07-11 02:46:35.612251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.312 [2024-07-11 02:46:35.612273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.312 [2024-07-11 02:46:35.612289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.312 [2024-07-11 02:46:35.616357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.312 [2024-07-11 02:46:35.625753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.312 [2024-07-11 02:46:35.626149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.312 [2024-07-11 02:46:35.626180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.312 [2024-07-11 02:46:35.626198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.312 [2024-07-11 02:46:35.626461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.312 [2024-07-11 02:46:35.626738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.313 [2024-07-11 02:46:35.626762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.313 [2024-07-11 02:46:35.626778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.313 [2024-07-11 02:46:35.630838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.313 [2024-07-11 02:46:35.640156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.313 [2024-07-11 02:46:35.640595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.313 [2024-07-11 02:46:35.640646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.313 [2024-07-11 02:46:35.640664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.313 [2024-07-11 02:46:35.640927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.313 [2024-07-11 02:46:35.641194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.313 [2024-07-11 02:46:35.641217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.313 [2024-07-11 02:46:35.641233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.313 [2024-07-11 02:46:35.645298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.313 [2024-07-11 02:46:35.654706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.313 [2024-07-11 02:46:35.655183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.313 [2024-07-11 02:46:35.655214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.313 [2024-07-11 02:46:35.655231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.313 [2024-07-11 02:46:35.655494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.313 [2024-07-11 02:46:35.655770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.313 [2024-07-11 02:46:35.655793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.313 [2024-07-11 02:46:35.655809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.313 [2024-07-11 02:46:35.659890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.313 [2024-07-11 02:46:35.669236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.313 [2024-07-11 02:46:35.669785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.313 [2024-07-11 02:46:35.669828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.313 [2024-07-11 02:46:35.669848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.313 [2024-07-11 02:46:35.670125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.313 [2024-07-11 02:46:35.670391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.313 [2024-07-11 02:46:35.670414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.313 [2024-07-11 02:46:35.670431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.313 [2024-07-11 02:46:35.674517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.313 [2024-07-11 02:46:35.683673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.313 [2024-07-11 02:46:35.684183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.313 [2024-07-11 02:46:35.684226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.313 [2024-07-11 02:46:35.684246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.313 [2024-07-11 02:46:35.684537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.313 [2024-07-11 02:46:35.684805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.313 [2024-07-11 02:46:35.684828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.313 [2024-07-11 02:46:35.684844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.313 [2024-07-11 02:46:35.688937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.313 [2024-07-11 02:46:35.698056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.313 [2024-07-11 02:46:35.698622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.313 [2024-07-11 02:46:35.698666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.313 [2024-07-11 02:46:35.698685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.313 [2024-07-11 02:46:35.698956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.313 [2024-07-11 02:46:35.699223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.313 [2024-07-11 02:46:35.699246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.313 [2024-07-11 02:46:35.699262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.313 [2024-07-11 02:46:35.703326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.313 [2024-07-11 02:46:35.712686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.313 [2024-07-11 02:46:35.713200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.313 [2024-07-11 02:46:35.713244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.313 [2024-07-11 02:46:35.713263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.313 [2024-07-11 02:46:35.713549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.313 [2024-07-11 02:46:35.713817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.313 [2024-07-11 02:46:35.713840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.313 [2024-07-11 02:46:35.713856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.313 [2024-07-11 02:46:35.717924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.313 [2024-07-11 02:46:35.727055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.313 [2024-07-11 02:46:35.727478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.313 [2024-07-11 02:46:35.727600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.313 [2024-07-11 02:46:35.727622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.313 [2024-07-11 02:46:35.727887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.313 [2024-07-11 02:46:35.728153] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.313 [2024-07-11 02:46:35.728176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.313 [2024-07-11 02:46:35.728198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.572 [2024-07-11 02:46:35.732245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.572 [2024-07-11 02:46:35.741573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.572 [2024-07-11 02:46:35.742051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.572 [2024-07-11 02:46:35.742099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.572 [2024-07-11 02:46:35.742118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.572 [2024-07-11 02:46:35.742381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.572 [2024-07-11 02:46:35.742666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.572 [2024-07-11 02:46:35.742689] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.572 [2024-07-11 02:46:35.742706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.572 [2024-07-11 02:46:35.746790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.572 [2024-07-11 02:46:35.755927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.572 [2024-07-11 02:46:35.756417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.572 [2024-07-11 02:46:35.756448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.572 [2024-07-11 02:46:35.756466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.572 [2024-07-11 02:46:35.756747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.572 [2024-07-11 02:46:35.757013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.572 [2024-07-11 02:46:35.757036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.572 [2024-07-11 02:46:35.757052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.572 [2024-07-11 02:46:35.761093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.572 [2024-07-11 02:46:35.770447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.572 [2024-07-11 02:46:35.770866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.572 [2024-07-11 02:46:35.770897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.572 [2024-07-11 02:46:35.770915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.572 [2024-07-11 02:46:35.771178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.572 [2024-07-11 02:46:35.771443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.572 [2024-07-11 02:46:35.771466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.572 [2024-07-11 02:46:35.771482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.572 [2024-07-11 02:46:35.775504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.572 [2024-07-11 02:46:35.784844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.572 [2024-07-11 02:46:35.785317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.572 [2024-07-11 02:46:35.785369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.572 [2024-07-11 02:46:35.785386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.572 [2024-07-11 02:46:35.785660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.572 [2024-07-11 02:46:35.785926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.572 [2024-07-11 02:46:35.785950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.572 [2024-07-11 02:46:35.785966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.572 [2024-07-11 02:46:35.790033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.572 [2024-07-11 02:46:35.799328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.572 [2024-07-11 02:46:35.799838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.572 [2024-07-11 02:46:35.799887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.572 [2024-07-11 02:46:35.799905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.572 [2024-07-11 02:46:35.800168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.572 [2024-07-11 02:46:35.800432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.572 [2024-07-11 02:46:35.800455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.572 [2024-07-11 02:46:35.800471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.572 [2024-07-11 02:46:35.804554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.572 [2024-07-11 02:46:35.813728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.572 [2024-07-11 02:46:35.814148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.572 [2024-07-11 02:46:35.814190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.572 [2024-07-11 02:46:35.814209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.572 [2024-07-11 02:46:35.814481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.572 [2024-07-11 02:46:35.814767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.572 [2024-07-11 02:46:35.814791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.572 [2024-07-11 02:46:35.814808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.572 [2024-07-11 02:46:35.818859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.573 [2024-07-11 02:46:35.828175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.573 [2024-07-11 02:46:35.828690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.573 [2024-07-11 02:46:35.828733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.573 [2024-07-11 02:46:35.828752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.573 [2024-07-11 02:46:35.829028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.573 [2024-07-11 02:46:35.829296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.573 [2024-07-11 02:46:35.829319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.573 [2024-07-11 02:46:35.829335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.573 [2024-07-11 02:46:35.833383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.573 [2024-07-11 02:46:35.842664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.573 [2024-07-11 02:46:35.843183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.573 [2024-07-11 02:46:35.843226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.573 [2024-07-11 02:46:35.843246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.573 [2024-07-11 02:46:35.843527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.573 [2024-07-11 02:46:35.843795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.573 [2024-07-11 02:46:35.843818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.573 [2024-07-11 02:46:35.843835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.573 [2024-07-11 02:46:35.847862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.573 [2024-07-11 02:46:35.857102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.573 [2024-07-11 02:46:35.857525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.573 [2024-07-11 02:46:35.857557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.573 [2024-07-11 02:46:35.857575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.573 [2024-07-11 02:46:35.857839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.573 [2024-07-11 02:46:35.858105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.573 [2024-07-11 02:46:35.858128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.573 [2024-07-11 02:46:35.858144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.573 [2024-07-11 02:46:35.862167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.573 [2024-07-11 02:46:35.871403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.573 [2024-07-11 02:46:35.871799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.573 [2024-07-11 02:46:35.871830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.573 [2024-07-11 02:46:35.871848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.573 [2024-07-11 02:46:35.872111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.573 [2024-07-11 02:46:35.872377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.573 [2024-07-11 02:46:35.872400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.573 [2024-07-11 02:46:35.872422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.573 [2024-07-11 02:46:35.876460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.573 [2024-07-11 02:46:35.885744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.573 [2024-07-11 02:46:35.886199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.573 [2024-07-11 02:46:35.886252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.573 [2024-07-11 02:46:35.886270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.573 [2024-07-11 02:46:35.886541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.573 [2024-07-11 02:46:35.886811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.573 [2024-07-11 02:46:35.886834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.573 [2024-07-11 02:46:35.886850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.573 [2024-07-11 02:46:35.890895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.573 [2024-07-11 02:46:35.900205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.573 [2024-07-11 02:46:35.900626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.573 [2024-07-11 02:46:35.900695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.573 [2024-07-11 02:46:35.900715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.573 [2024-07-11 02:46:35.900985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.573 [2024-07-11 02:46:35.901258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.573 [2024-07-11 02:46:35.901281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.573 [2024-07-11 02:46:35.901297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.573 [2024-07-11 02:46:35.905358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.573 [2024-07-11 02:46:35.914686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.573 [2024-07-11 02:46:35.915127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.573 [2024-07-11 02:46:35.915170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.573 [2024-07-11 02:46:35.915189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.573 [2024-07-11 02:46:35.915459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.573 [2024-07-11 02:46:35.915736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.573 [2024-07-11 02:46:35.915761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.573 [2024-07-11 02:46:35.915778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.573 [2024-07-11 02:46:35.919798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.573 [2024-07-11 02:46:35.929043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.573 [2024-07-11 02:46:35.929507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.573 [2024-07-11 02:46:35.929565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.573 [2024-07-11 02:46:35.929586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.573 [2024-07-11 02:46:35.929857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.573 [2024-07-11 02:46:35.930124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.573 [2024-07-11 02:46:35.930147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.573 [2024-07-11 02:46:35.930163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.573 [2024-07-11 02:46:35.934187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.573 [2024-07-11 02:46:35.943506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.573 [2024-07-11 02:46:35.944035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.573 [2024-07-11 02:46:35.944087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.573 [2024-07-11 02:46:35.944105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.573 [2024-07-11 02:46:35.944370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.573 [2024-07-11 02:46:35.944652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.573 [2024-07-11 02:46:35.944675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.573 [2024-07-11 02:46:35.944692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.573 [2024-07-11 02:46:35.948714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.573 [2024-07-11 02:46:35.957966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.573 [2024-07-11 02:46:35.958444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.573 [2024-07-11 02:46:35.958492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.573 [2024-07-11 02:46:35.958516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.573 [2024-07-11 02:46:35.958781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.573 [2024-07-11 02:46:35.959053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.573 [2024-07-11 02:46:35.959076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.573 [2024-07-11 02:46:35.959092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.573 [2024-07-11 02:46:35.963138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.573 [2024-07-11 02:46:35.972436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.573 [2024-07-11 02:46:35.972949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.573 [2024-07-11 02:46:35.972992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.573 [2024-07-11 02:46:35.973011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.573 [2024-07-11 02:46:35.973282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.573 [2024-07-11 02:46:35.973568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.573 [2024-07-11 02:46:35.973594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.573 [2024-07-11 02:46:35.973610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.573 [2024-07-11 02:46:35.977651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.574 [2024-07-11 02:46:35.986990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.574 [2024-07-11 02:46:35.987534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.574 [2024-07-11 02:46:35.987578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.574 [2024-07-11 02:46:35.987597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.574 [2024-07-11 02:46:35.987877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.574 [2024-07-11 02:46:35.988151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.574 [2024-07-11 02:46:35.988174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.574 [2024-07-11 02:46:35.988191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.832 [2024-07-11 02:46:35.992228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.832 [2024-07-11 02:46:36.001561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.832 [2024-07-11 02:46:36.002050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.832 [2024-07-11 02:46:36.002093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.832 [2024-07-11 02:46:36.002113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.832 [2024-07-11 02:46:36.002383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.832 [2024-07-11 02:46:36.002664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.832 [2024-07-11 02:46:36.002688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.832 [2024-07-11 02:46:36.002704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.832 [2024-07-11 02:46:36.006738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.832 [2024-07-11 02:46:36.016089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.832 [2024-07-11 02:46:36.016508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.832 [2024-07-11 02:46:36.016548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.832 [2024-07-11 02:46:36.016566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.832 [2024-07-11 02:46:36.016830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.832 [2024-07-11 02:46:36.017103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.832 [2024-07-11 02:46:36.017126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.832 [2024-07-11 02:46:36.017142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.832 [2024-07-11 02:46:36.021200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.832 [2024-07-11 02:46:36.030566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.832 [2024-07-11 02:46:36.030990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.832 [2024-07-11 02:46:36.031021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.832 [2024-07-11 02:46:36.031039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.832 [2024-07-11 02:46:36.031302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.832 [2024-07-11 02:46:36.031579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.832 [2024-07-11 02:46:36.031602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.832 [2024-07-11 02:46:36.031621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.832 [2024-07-11 02:46:36.035711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.832 [2024-07-11 02:46:36.044998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.832 [2024-07-11 02:46:36.045464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.832 [2024-07-11 02:46:36.045496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.832 [2024-07-11 02:46:36.045523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.832 [2024-07-11 02:46:36.045794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.832 [2024-07-11 02:46:36.046059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.832 [2024-07-11 02:46:36.046083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.832 [2024-07-11 02:46:36.046099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.832 [2024-07-11 02:46:36.050156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.832 [2024-07-11 02:46:36.059490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:45.832 [2024-07-11 02:46:36.059950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:45.832 [2024-07-11 02:46:36.060007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:45.832 [2024-07-11 02:46:36.060026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:45.832 [2024-07-11 02:46:36.060297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:45.832 [2024-07-11 02:46:36.060583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:45.832 [2024-07-11 02:46:36.060607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:45.832 [2024-07-11 02:46:36.060624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:45.832 [2024-07-11 02:46:36.064702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:45.833 [2024-07-11 02:46:36.073858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.833 [2024-07-11 02:46:36.074331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.833 [2024-07-11 02:46:36.074388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.833 [2024-07-11 02:46:36.074412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.833 [2024-07-11 02:46:36.074696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.833 [2024-07-11 02:46:36.074963] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.833 [2024-07-11 02:46:36.074986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.833 [2024-07-11 02:46:36.075002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.833 [2024-07-11 02:46:36.079084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.833 [2024-07-11 02:46:36.088383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.833 [2024-07-11 02:46:36.088838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.833 [2024-07-11 02:46:36.088870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.833 [2024-07-11 02:46:36.088888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.833 [2024-07-11 02:46:36.089158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.833 [2024-07-11 02:46:36.089423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.833 [2024-07-11 02:46:36.089447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.833 [2024-07-11 02:46:36.089463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.833 [2024-07-11 02:46:36.093506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.833 [2024-07-11 02:46:36.102867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.833 [2024-07-11 02:46:36.103335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.833 [2024-07-11 02:46:36.103365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.833 [2024-07-11 02:46:36.103383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.833 [2024-07-11 02:46:36.103659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.833 [2024-07-11 02:46:36.103925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.833 [2024-07-11 02:46:36.103948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.833 [2024-07-11 02:46:36.103964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.833 [2024-07-11 02:46:36.108049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.833 [2024-07-11 02:46:36.117423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.833 [2024-07-11 02:46:36.117963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.833 [2024-07-11 02:46:36.118012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.833 [2024-07-11 02:46:36.118030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.833 [2024-07-11 02:46:36.118299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.833 [2024-07-11 02:46:36.118577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.833 [2024-07-11 02:46:36.118608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.833 [2024-07-11 02:46:36.118624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.833 [2024-07-11 02:46:36.122689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.833 [2024-07-11 02:46:36.131825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.833 [2024-07-11 02:46:36.132276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.833 [2024-07-11 02:46:36.132333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.833 [2024-07-11 02:46:36.132350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.833 [2024-07-11 02:46:36.132623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.833 [2024-07-11 02:46:36.132889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.834 [2024-07-11 02:46:36.132912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.834 [2024-07-11 02:46:36.132928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.834 [2024-07-11 02:46:36.136984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.834 [2024-07-11 02:46:36.146231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.834 [2024-07-11 02:46:36.146674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.834 [2024-07-11 02:46:36.146726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.834 [2024-07-11 02:46:36.146744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.834 [2024-07-11 02:46:36.147007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.834 [2024-07-11 02:46:36.147273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.834 [2024-07-11 02:46:36.147296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.834 [2024-07-11 02:46:36.147312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.834 [2024-07-11 02:46:36.151439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.834 [2024-07-11 02:46:36.160778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.834 [2024-07-11 02:46:36.161258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.834 [2024-07-11 02:46:36.161288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.834 [2024-07-11 02:46:36.161306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.834 [2024-07-11 02:46:36.161581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.834 [2024-07-11 02:46:36.161847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.834 [2024-07-11 02:46:36.161870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.834 [2024-07-11 02:46:36.161886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.834 [2024-07-11 02:46:36.165930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.834 [2024-07-11 02:46:36.175261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.834 [2024-07-11 02:46:36.175799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.834 [2024-07-11 02:46:36.175842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.834 [2024-07-11 02:46:36.175861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.834 [2024-07-11 02:46:36.176131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.834 [2024-07-11 02:46:36.176398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.834 [2024-07-11 02:46:36.176421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.834 [2024-07-11 02:46:36.176437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.834 [2024-07-11 02:46:36.180499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.834 [2024-07-11 02:46:36.189817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.834 [2024-07-11 02:46:36.190263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.834 [2024-07-11 02:46:36.190318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.834 [2024-07-11 02:46:36.190338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.834 [2024-07-11 02:46:36.190624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.834 [2024-07-11 02:46:36.190893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.834 [2024-07-11 02:46:36.190916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.834 [2024-07-11 02:46:36.190932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.834 [2024-07-11 02:46:36.194973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.834 [2024-07-11 02:46:36.204367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.834 [2024-07-11 02:46:36.204841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.834 [2024-07-11 02:46:36.204893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.834 [2024-07-11 02:46:36.204911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.834 [2024-07-11 02:46:36.205181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.834 [2024-07-11 02:46:36.205447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.834 [2024-07-11 02:46:36.205469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.834 [2024-07-11 02:46:36.205485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.834 [2024-07-11 02:46:36.209554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.834 [2024-07-11 02:46:36.218902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.834 [2024-07-11 02:46:36.219403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.834 [2024-07-11 02:46:36.219472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.834 [2024-07-11 02:46:36.219491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.834 [2024-07-11 02:46:36.219768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.834 [2024-07-11 02:46:36.220034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.834 [2024-07-11 02:46:36.220057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.834 [2024-07-11 02:46:36.220073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.834 [2024-07-11 02:46:36.224140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.834 [2024-07-11 02:46:36.233542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.834 [2024-07-11 02:46:36.233965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.834 [2024-07-11 02:46:36.234017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.834 [2024-07-11 02:46:36.234035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.834 [2024-07-11 02:46:36.234298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.835 [2024-07-11 02:46:36.234574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.835 [2024-07-11 02:46:36.234598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.835 [2024-07-11 02:46:36.234620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:45.835 [2024-07-11 02:46:36.238712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:45.835 [2024-07-11 02:46:36.248045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:45.835 [2024-07-11 02:46:36.248535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:45.835 [2024-07-11 02:46:36.248580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:45.835 [2024-07-11 02:46:36.248599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:45.835 [2024-07-11 02:46:36.248863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:45.835 [2024-07-11 02:46:36.249128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:45.835 [2024-07-11 02:46:36.249151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:45.835 [2024-07-11 02:46:36.249167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.094 [2024-07-11 02:46:36.253199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.094 [2024-07-11 02:46:36.262549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.094 [2024-07-11 02:46:36.263063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.094 [2024-07-11 02:46:36.263120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.094 [2024-07-11 02:46:36.263140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.094 [2024-07-11 02:46:36.263410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.094 [2024-07-11 02:46:36.263692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.094 [2024-07-11 02:46:36.263716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.094 [2024-07-11 02:46:36.263739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.094 [2024-07-11 02:46:36.267816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.094 [2024-07-11 02:46:36.276931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.094 [2024-07-11 02:46:36.277489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.094 [2024-07-11 02:46:36.277546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.094 [2024-07-11 02:46:36.277576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.094 [2024-07-11 02:46:36.277854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.094 [2024-07-11 02:46:36.278126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.094 [2024-07-11 02:46:36.278150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.094 [2024-07-11 02:46:36.278168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.094 [2024-07-11 02:46:36.282219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.094 [2024-07-11 02:46:36.291519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.094 [2024-07-11 02:46:36.292010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.094 [2024-07-11 02:46:36.292058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.094 [2024-07-11 02:46:36.292077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.094 [2024-07-11 02:46:36.292358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.094 [2024-07-11 02:46:36.292637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.094 [2024-07-11 02:46:36.292662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.094 [2024-07-11 02:46:36.292678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.094 [2024-07-11 02:46:36.296708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.094 [2024-07-11 02:46:36.305980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.094 [2024-07-11 02:46:36.306481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.094 [2024-07-11 02:46:36.306540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.094 [2024-07-11 02:46:36.306558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.094 [2024-07-11 02:46:36.306821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.094 [2024-07-11 02:46:36.307088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.094 [2024-07-11 02:46:36.307110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.094 [2024-07-11 02:46:36.307126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.094 [2024-07-11 02:46:36.311154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.094 [2024-07-11 02:46:36.320432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.094 [2024-07-11 02:46:36.320887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.094 [2024-07-11 02:46:36.320945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.094 [2024-07-11 02:46:36.320963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.094 [2024-07-11 02:46:36.321239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.094 [2024-07-11 02:46:36.321505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.094 [2024-07-11 02:46:36.321538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.094 [2024-07-11 02:46:36.321554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.094 [2024-07-11 02:46:36.325586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.094 [2024-07-11 02:46:36.334835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.094 [2024-07-11 02:46:36.335326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.094 [2024-07-11 02:46:36.335376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.094 [2024-07-11 02:46:36.335394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.094 [2024-07-11 02:46:36.335668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.094 [2024-07-11 02:46:36.335935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.094 [2024-07-11 02:46:36.335958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.094 [2024-07-11 02:46:36.335974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.094 [2024-07-11 02:46:36.340016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.094 [2024-07-11 02:46:36.349300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.094 [2024-07-11 02:46:36.349778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.094 [2024-07-11 02:46:36.349809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.094 [2024-07-11 02:46:36.349826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.094 [2024-07-11 02:46:36.350090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.094 [2024-07-11 02:46:36.350362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.094 [2024-07-11 02:46:36.350385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.094 [2024-07-11 02:46:36.350400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.094 [2024-07-11 02:46:36.354493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.094 [2024-07-11 02:46:36.363752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.094 [2024-07-11 02:46:36.364270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.094 [2024-07-11 02:46:36.364312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.094 [2024-07-11 02:46:36.364331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.094 [2024-07-11 02:46:36.364620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.094 [2024-07-11 02:46:36.364895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.094 [2024-07-11 02:46:36.364918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.094 [2024-07-11 02:46:36.364935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.094 [2024-07-11 02:46:36.368998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.094 [2024-07-11 02:46:36.378251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.094 [2024-07-11 02:46:36.378710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.094 [2024-07-11 02:46:36.378742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.094 [2024-07-11 02:46:36.378760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.094 [2024-07-11 02:46:36.379024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.094 [2024-07-11 02:46:36.379291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.094 [2024-07-11 02:46:36.379313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.094 [2024-07-11 02:46:36.379329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.094 [2024-07-11 02:46:36.383371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.094 [2024-07-11 02:46:36.392628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.094 [2024-07-11 02:46:36.393135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.094 [2024-07-11 02:46:36.393184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.094 [2024-07-11 02:46:36.393202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.094 [2024-07-11 02:46:36.393466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.094 [2024-07-11 02:46:36.393741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.094 [2024-07-11 02:46:36.393765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.094 [2024-07-11 02:46:36.393780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.094 [2024-07-11 02:46:36.397815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.094 [2024-07-11 02:46:36.407058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.094 [2024-07-11 02:46:36.407476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.094 [2024-07-11 02:46:36.407528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.094 [2024-07-11 02:46:36.407550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.094 [2024-07-11 02:46:36.407821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.094 [2024-07-11 02:46:36.408090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.094 [2024-07-11 02:46:36.408112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.094 [2024-07-11 02:46:36.408129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.094 [2024-07-11 02:46:36.412158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.094 [2024-07-11 02:46:36.421435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.094 [2024-07-11 02:46:36.421969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.094 [2024-07-11 02:46:36.422012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.094 [2024-07-11 02:46:36.422031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.094 [2024-07-11 02:46:36.422301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.094 [2024-07-11 02:46:36.422581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.094 [2024-07-11 02:46:36.422605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.094 [2024-07-11 02:46:36.422622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.094 [2024-07-11 02:46:36.426659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.094 [2024-07-11 02:46:36.435964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.094 [2024-07-11 02:46:36.436488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.094 [2024-07-11 02:46:36.436539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.094 [2024-07-11 02:46:36.436560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.094 [2024-07-11 02:46:36.436830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.094 [2024-07-11 02:46:36.437097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.094 [2024-07-11 02:46:36.437120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.094 [2024-07-11 02:46:36.437136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.094 [2024-07-11 02:46:36.441170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.094 [2024-07-11 02:46:36.450454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.094 [2024-07-11 02:46:36.450940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.095 [2024-07-11 02:46:36.450983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.095 [2024-07-11 02:46:36.451002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.095 [2024-07-11 02:46:36.451273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.095 [2024-07-11 02:46:36.451551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.095 [2024-07-11 02:46:36.451574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.095 [2024-07-11 02:46:36.451591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.095 [2024-07-11 02:46:36.455621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.095 [2024-07-11 02:46:36.464883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.095 [2024-07-11 02:46:36.465324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.095 [2024-07-11 02:46:36.465381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.095 [2024-07-11 02:46:36.465400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.095 [2024-07-11 02:46:36.465674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.095 [2024-07-11 02:46:36.465942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.095 [2024-07-11 02:46:36.465965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.095 [2024-07-11 02:46:36.465982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.095 [2024-07-11 02:46:36.470035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.095 [2024-07-11 02:46:36.479323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.095 [2024-07-11 02:46:36.479722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.095 [2024-07-11 02:46:36.479753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.095 [2024-07-11 02:46:36.479770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.095 [2024-07-11 02:46:36.480033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.095 [2024-07-11 02:46:36.480300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.095 [2024-07-11 02:46:36.480322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.095 [2024-07-11 02:46:36.480339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.095 [2024-07-11 02:46:36.484370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.095 [2024-07-11 02:46:36.493860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.095 [2024-07-11 02:46:36.494269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.095 [2024-07-11 02:46:36.494314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.095 [2024-07-11 02:46:36.494332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.095 [2024-07-11 02:46:36.494605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.095 [2024-07-11 02:46:36.494872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.095 [2024-07-11 02:46:36.494894] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.095 [2024-07-11 02:46:36.494910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.095 [2024-07-11 02:46:36.498933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.095 [2024-07-11 02:46:36.508219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.095 [2024-07-11 02:46:36.509317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.095 [2024-07-11 02:46:36.509351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.095 [2024-07-11 02:46:36.509370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.095 [2024-07-11 02:46:36.509646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.095 [2024-07-11 02:46:36.509919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.095 [2024-07-11 02:46:36.509942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.095 [2024-07-11 02:46:36.509958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.353 [2024-07-11 02:46:36.513975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.353 [2024-07-11 02:46:36.522770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.353 [2024-07-11 02:46:36.523765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.353 [2024-07-11 02:46:36.523797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.353 [2024-07-11 02:46:36.523815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.353 [2024-07-11 02:46:36.524081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.353 [2024-07-11 02:46:36.524349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.353 [2024-07-11 02:46:36.524371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.353 [2024-07-11 02:46:36.524387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.353 [2024-07-11 02:46:36.528414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.353 [2024-07-11 02:46:36.537218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.353 [2024-07-11 02:46:36.537611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.353 [2024-07-11 02:46:36.537642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.353 [2024-07-11 02:46:36.537660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.353 [2024-07-11 02:46:36.537924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.353 [2024-07-11 02:46:36.538190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.353 [2024-07-11 02:46:36.538211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.353 [2024-07-11 02:46:36.538227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.353 [2024-07-11 02:46:36.542263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.353 [2024-07-11 02:46:36.551754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.353 [2024-07-11 02:46:36.552225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.353 [2024-07-11 02:46:36.552276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.353 [2024-07-11 02:46:36.552294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.353 [2024-07-11 02:46:36.552572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.353 [2024-07-11 02:46:36.552838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.353 [2024-07-11 02:46:36.552860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.353 [2024-07-11 02:46:36.552876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.353 [2024-07-11 02:46:36.556894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.353 [2024-07-11 02:46:36.566200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.353 [2024-07-11 02:46:36.566624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.353 [2024-07-11 02:46:36.566668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.353 [2024-07-11 02:46:36.566686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.353 [2024-07-11 02:46:36.566949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.353 [2024-07-11 02:46:36.567214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.354 [2024-07-11 02:46:36.567236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.354 [2024-07-11 02:46:36.567252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.354 [2024-07-11 02:46:36.571273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.354 [2024-07-11 02:46:36.580550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.354 [2024-07-11 02:46:36.581039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.354 [2024-07-11 02:46:36.581089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.354 [2024-07-11 02:46:36.581106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.354 [2024-07-11 02:46:36.581368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.354 [2024-07-11 02:46:36.581648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.354 [2024-07-11 02:46:36.581670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.354 [2024-07-11 02:46:36.581686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.354 [2024-07-11 02:46:36.585716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.354 [2024-07-11 02:46:36.594970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.354 [2024-07-11 02:46:36.595502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.354 [2024-07-11 02:46:36.595564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.354 [2024-07-11 02:46:36.595583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.354 [2024-07-11 02:46:36.595860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.354 [2024-07-11 02:46:36.596127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.354 [2024-07-11 02:46:36.596149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.354 [2024-07-11 02:46:36.596165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.354 [2024-07-11 02:46:36.600193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.354 [2024-07-11 02:46:36.609448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.354 [2024-07-11 02:46:36.609957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.354 [2024-07-11 02:46:36.609998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.354 [2024-07-11 02:46:36.610023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.354 [2024-07-11 02:46:36.610293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.354 [2024-07-11 02:46:36.610573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.354 [2024-07-11 02:46:36.610597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.354 [2024-07-11 02:46:36.610613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.354 [2024-07-11 02:46:36.614640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.354 [2024-07-11 02:46:36.623910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.354 [2024-07-11 02:46:36.624388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.354 [2024-07-11 02:46:36.624447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.354 [2024-07-11 02:46:36.624467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.354 [2024-07-11 02:46:36.624754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.354 [2024-07-11 02:46:36.625022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.354 [2024-07-11 02:46:36.625044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.354 [2024-07-11 02:46:36.625060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.354 [2024-07-11 02:46:36.629084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.354 [2024-07-11 02:46:36.638344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.354 [2024-07-11 02:46:36.638853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.354 [2024-07-11 02:46:36.638894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.354 [2024-07-11 02:46:36.638914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.354 [2024-07-11 02:46:36.639197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.354 [2024-07-11 02:46:36.639469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.354 [2024-07-11 02:46:36.639491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.354 [2024-07-11 02:46:36.639507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.354 [2024-07-11 02:46:36.643554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.354 [2024-07-11 02:46:36.652880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.354 [2024-07-11 02:46:36.653384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.354 [2024-07-11 02:46:36.653458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.354 [2024-07-11 02:46:36.653477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.354 [2024-07-11 02:46:36.653760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.354 [2024-07-11 02:46:36.654028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.354 [2024-07-11 02:46:36.654055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.354 [2024-07-11 02:46:36.654071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.354 [2024-07-11 02:46:36.658101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.354 [2024-07-11 02:46:36.667349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.354 [2024-07-11 02:46:36.667818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.354 [2024-07-11 02:46:36.667849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.354 [2024-07-11 02:46:36.667867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.354 [2024-07-11 02:46:36.668130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.354 [2024-07-11 02:46:36.668396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.354 [2024-07-11 02:46:36.668418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.354 [2024-07-11 02:46:36.668434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.354 [2024-07-11 02:46:36.672488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.354 [2024-07-11 02:46:36.681754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.354 [2024-07-11 02:46:36.682212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.354 [2024-07-11 02:46:36.682242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.354 [2024-07-11 02:46:36.682259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.354 [2024-07-11 02:46:36.682532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.354 [2024-07-11 02:46:36.682799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.354 [2024-07-11 02:46:36.682821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.354 [2024-07-11 02:46:36.682836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.354 [2024-07-11 02:46:36.686851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.354 [2024-07-11 02:46:36.696094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.354 [2024-07-11 02:46:36.696563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.354 [2024-07-11 02:46:36.696594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.354 [2024-07-11 02:46:36.696611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.354 [2024-07-11 02:46:36.696874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.354 [2024-07-11 02:46:36.697140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.354 [2024-07-11 02:46:36.697162] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.354 [2024-07-11 02:46:36.697177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.354 [2024-07-11 02:46:36.701215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.354 [2024-07-11 02:46:36.710485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.354 [2024-07-11 02:46:36.710960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.354 [2024-07-11 02:46:36.711030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.354 [2024-07-11 02:46:36.711049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.354 [2024-07-11 02:46:36.711319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.354 [2024-07-11 02:46:36.711598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.354 [2024-07-11 02:46:36.711621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.354 [2024-07-11 02:46:36.711637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.354 [2024-07-11 02:46:36.715668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.354 [2024-07-11 02:46:36.724946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.354 [2024-07-11 02:46:36.725409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.354 [2024-07-11 02:46:36.725440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.354 [2024-07-11 02:46:36.725457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.354 [2024-07-11 02:46:36.725731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.354 [2024-07-11 02:46:36.725997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.355 [2024-07-11 02:46:36.726019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.355 [2024-07-11 02:46:36.726035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.355 [2024-07-11 02:46:36.730052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.355 [2024-07-11 02:46:36.739343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.355 [2024-07-11 02:46:36.739878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.355 [2024-07-11 02:46:36.739927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.355 [2024-07-11 02:46:36.739945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.355 [2024-07-11 02:46:36.740208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.355 [2024-07-11 02:46:36.740473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.355 [2024-07-11 02:46:36.740495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.355 [2024-07-11 02:46:36.740520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.355 [2024-07-11 02:46:36.744568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.355 [2024-07-11 02:46:36.753878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:46.355 [2024-07-11 02:46:36.754281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:46.355 [2024-07-11 02:46:36.754334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420
00:40:46.355 [2024-07-11 02:46:36.754352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set
00:40:46.355 [2024-07-11 02:46:36.754631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor
00:40:46.355 [2024-07-11 02:46:36.754898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:46.355 [2024-07-11 02:46:36.754919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:46.355 [2024-07-11 02:46:36.754936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:46.355 [2024-07-11 02:46:36.758963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:46.355 [2024-07-11 02:46:36.768239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.355 [2024-07-11 02:46:36.768765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.355 [2024-07-11 02:46:36.768807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.355 [2024-07-11 02:46:36.768827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.355 [2024-07-11 02:46:36.769097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.355 [2024-07-11 02:46:36.769364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.355 [2024-07-11 02:46:36.769386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.355 [2024-07-11 02:46:36.769401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.612 [2024-07-11 02:46:36.773431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.612 [2024-07-11 02:46:36.782730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.612 [2024-07-11 02:46:36.783277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.612 [2024-07-11 02:46:36.783333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.612 [2024-07-11 02:46:36.783352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.612 [2024-07-11 02:46:36.783641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.613 [2024-07-11 02:46:36.783909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.613 [2024-07-11 02:46:36.783931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.613 [2024-07-11 02:46:36.783947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.613 [2024-07-11 02:46:36.787972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.613 [2024-07-11 02:46:36.797259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.613 [2024-07-11 02:46:36.797712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.613 [2024-07-11 02:46:36.797743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.613 [2024-07-11 02:46:36.797761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.613 [2024-07-11 02:46:36.798025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.613 [2024-07-11 02:46:36.798292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.613 [2024-07-11 02:46:36.798314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.613 [2024-07-11 02:46:36.798335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.613 [2024-07-11 02:46:36.802374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.613 [2024-07-11 02:46:36.811682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.613 [2024-07-11 02:46:36.812158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.613 [2024-07-11 02:46:36.812208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.613 [2024-07-11 02:46:36.812226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.613 [2024-07-11 02:46:36.812488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.613 [2024-07-11 02:46:36.812764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.613 [2024-07-11 02:46:36.812786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.613 [2024-07-11 02:46:36.812802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.613 [2024-07-11 02:46:36.816865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.613 [2024-07-11 02:46:36.826189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.613 [2024-07-11 02:46:36.826665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.613 [2024-07-11 02:46:36.826751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.613 [2024-07-11 02:46:36.826769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.613 [2024-07-11 02:46:36.827032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.613 [2024-07-11 02:46:36.827297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.613 [2024-07-11 02:46:36.827318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.613 [2024-07-11 02:46:36.827334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.613 [2024-07-11 02:46:36.831370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.613 [2024-07-11 02:46:36.840655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.613 [2024-07-11 02:46:36.841214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.613 [2024-07-11 02:46:36.841268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.613 [2024-07-11 02:46:36.841287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.613 [2024-07-11 02:46:36.841573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.613 [2024-07-11 02:46:36.841841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.613 [2024-07-11 02:46:36.841863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.613 [2024-07-11 02:46:36.841879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.613 [2024-07-11 02:46:36.845979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.613 [2024-07-11 02:46:36.855028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.613 [2024-07-11 02:46:36.855507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.613 [2024-07-11 02:46:36.855568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.613 [2024-07-11 02:46:36.855592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.613 [2024-07-11 02:46:36.855855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.613 [2024-07-11 02:46:36.856121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.613 [2024-07-11 02:46:36.856143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.613 [2024-07-11 02:46:36.856158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.613 [2024-07-11 02:46:36.860232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.613 [2024-07-11 02:46:36.869497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.613 [2024-07-11 02:46:36.870088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.613 [2024-07-11 02:46:36.870129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.613 [2024-07-11 02:46:36.870148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.613 [2024-07-11 02:46:36.870417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.613 [2024-07-11 02:46:36.870696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.613 [2024-07-11 02:46:36.870719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.613 [2024-07-11 02:46:36.870736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.613 [2024-07-11 02:46:36.874778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.613 [2024-07-11 02:46:36.883844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.613 [2024-07-11 02:46:36.884273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.613 [2024-07-11 02:46:36.884314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.613 [2024-07-11 02:46:36.884333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.613 [2024-07-11 02:46:36.884616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.613 [2024-07-11 02:46:36.884884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.613 [2024-07-11 02:46:36.884906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.613 [2024-07-11 02:46:36.884921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.613 [2024-07-11 02:46:36.888952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.613 [2024-07-11 02:46:36.898228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.613 [2024-07-11 02:46:36.898695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.613 [2024-07-11 02:46:36.898767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.613 [2024-07-11 02:46:36.898787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.613 [2024-07-11 02:46:36.899057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.613 [2024-07-11 02:46:36.899330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.613 [2024-07-11 02:46:36.899352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.613 [2024-07-11 02:46:36.899368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.613 [2024-07-11 02:46:36.903412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.613 [2024-07-11 02:46:36.912696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.613 [2024-07-11 02:46:36.913220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.613 [2024-07-11 02:46:36.913274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.613 [2024-07-11 02:46:36.913293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.613 [2024-07-11 02:46:36.913576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.613 [2024-07-11 02:46:36.913844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.613 [2024-07-11 02:46:36.913866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.613 [2024-07-11 02:46:36.913882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.613 [2024-07-11 02:46:36.917912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.613 [2024-07-11 02:46:36.927189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.613 [2024-07-11 02:46:36.927693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.613 [2024-07-11 02:46:36.927734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.613 [2024-07-11 02:46:36.927753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.613 [2024-07-11 02:46:36.928023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.613 [2024-07-11 02:46:36.928296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.613 [2024-07-11 02:46:36.928319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.613 [2024-07-11 02:46:36.928335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.613 [2024-07-11 02:46:36.932365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.613 [2024-07-11 02:46:36.941655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.613 [2024-07-11 02:46:36.942067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.613 [2024-07-11 02:46:36.942118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.614 [2024-07-11 02:46:36.942136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.614 [2024-07-11 02:46:36.942399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.614 [2024-07-11 02:46:36.942674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.614 [2024-07-11 02:46:36.942697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.614 [2024-07-11 02:46:36.942713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.614 [2024-07-11 02:46:36.946766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.614 [2024-07-11 02:46:36.956044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.614 [2024-07-11 02:46:36.956546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.614 [2024-07-11 02:46:36.956593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.614 [2024-07-11 02:46:36.956611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.614 [2024-07-11 02:46:36.956875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.614 [2024-07-11 02:46:36.957140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.614 [2024-07-11 02:46:36.957161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.614 [2024-07-11 02:46:36.957177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.614 [2024-07-11 02:46:36.961215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.614 [2024-07-11 02:46:36.970485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.614 [2024-07-11 02:46:36.970980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.614 [2024-07-11 02:46:36.971040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.614 [2024-07-11 02:46:36.971060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.614 [2024-07-11 02:46:36.971336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.614 [2024-07-11 02:46:36.971622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.614 [2024-07-11 02:46:36.971644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.614 [2024-07-11 02:46:36.971661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.614 [2024-07-11 02:46:36.975722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.614 [2024-07-11 02:46:36.984979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1977781 Killed "${NVMF_APP[@]}" "$@" 00:40:46.614 [2024-07-11 02:46:36.985464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.614 [2024-07-11 02:46:36.985522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.614 [2024-07-11 02:46:36.985542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.614 02:46:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:40:46.614 [2024-07-11 02:46:36.985806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.614 02:46:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:40:46.614 02:46:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:46.614 [2024-07-11 02:46:36.986072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.614 [2024-07-11 02:46:36.986094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.614 [2024-07-11 02:46:36.986110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:40:46.614 02:46:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:46.614 02:46:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:46.614 02:46:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1978507 00:40:46.614 02:46:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:40:46.614 [2024-07-11 02:46:36.990149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:46.614 02:46:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1978507 00:40:46.614 02:46:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1978507 ']' 00:40:46.614 02:46:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:46.614 02:46:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:46.614 02:46:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:46.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:46.614 02:46:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:46.614 02:46:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:46.614 [2024-07-11 02:46:36.999431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.614 [2024-07-11 02:46:36.999918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.614 [2024-07-11 02:46:36.999961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.614 [2024-07-11 02:46:36.999982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.614 [2024-07-11 02:46:37.000267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.614 [2024-07-11 02:46:37.000546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.614 [2024-07-11 02:46:37.000574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.614 [2024-07-11 02:46:37.000591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.614 [2024-07-11 02:46:37.004654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.614 [2024-07-11 02:46:37.013921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.614 [2024-07-11 02:46:37.014348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.614 [2024-07-11 02:46:37.014379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.614 [2024-07-11 02:46:37.014398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.614 [2024-07-11 02:46:37.014670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.614 [2024-07-11 02:46:37.014937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.614 [2024-07-11 02:46:37.014959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.614 [2024-07-11 02:46:37.014975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.614 [2024-07-11 02:46:37.018997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.614 [2024-07-11 02:46:37.028252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.614 [2024-07-11 02:46:37.028663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.614 [2024-07-11 02:46:37.028706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.614 [2024-07-11 02:46:37.028734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.614 [2024-07-11 02:46:37.029006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.614 [2024-07-11 02:46:37.029274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.614 [2024-07-11 02:46:37.029295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.614 [2024-07-11 02:46:37.029311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.614 [2024-07-11 02:46:37.030770] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:40:46.614 [2024-07-11 02:46:37.030862] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:46.873 [2024-07-11 02:46:37.033340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.873 [2024-07-11 02:46:37.042651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.873 [2024-07-11 02:46:37.043100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.873 [2024-07-11 02:46:37.043135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.873 [2024-07-11 02:46:37.043155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.873 [2024-07-11 02:46:37.043427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.873 [2024-07-11 02:46:37.043715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.873 [2024-07-11 02:46:37.043739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.873 [2024-07-11 02:46:37.043757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.873 [2024-07-11 02:46:37.047864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.873 [2024-07-11 02:46:37.057091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.873 [2024-07-11 02:46:37.057533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.873 [2024-07-11 02:46:37.057566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.873 [2024-07-11 02:46:37.057584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.873 [2024-07-11 02:46:37.057855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.873 [2024-07-11 02:46:37.058127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.873 [2024-07-11 02:46:37.058155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.873 [2024-07-11 02:46:37.058173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.873 EAL: No free 2048 kB hugepages reported on node 1 00:40:46.873 [2024-07-11 02:46:37.062527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.873 [2024-07-11 02:46:37.071655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.873 [2024-07-11 02:46:37.072183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.873 [2024-07-11 02:46:37.072231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.873 [2024-07-11 02:46:37.072261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.873 [2024-07-11 02:46:37.072558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.873 [2024-07-11 02:46:37.072840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.873 [2024-07-11 02:46:37.072862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.873 [2024-07-11 02:46:37.072879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.873 [2024-07-11 02:46:37.076948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.873 [2024-07-11 02:46:37.085998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.873 [2024-07-11 02:46:37.086417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.873 [2024-07-11 02:46:37.086448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.873 [2024-07-11 02:46:37.086466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.873 [2024-07-11 02:46:37.086739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.873 [2024-07-11 02:46:37.087006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.873 [2024-07-11 02:46:37.087028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.873 [2024-07-11 02:46:37.087044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.873 [2024-07-11 02:46:37.090969] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:46.873 [2024-07-11 02:46:37.091069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.873 [2024-07-11 02:46:37.100502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.873 [2024-07-11 02:46:37.101043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.873 [2024-07-11 02:46:37.101081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.873 [2024-07-11 02:46:37.101101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.873 [2024-07-11 02:46:37.101379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.873 [2024-07-11 02:46:37.101665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.873 [2024-07-11 02:46:37.101688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.873 [2024-07-11 02:46:37.101707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.873 [2024-07-11 02:46:37.105769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.873 [2024-07-11 02:46:37.115094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.873 [2024-07-11 02:46:37.115656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.873 [2024-07-11 02:46:37.115697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.873 [2024-07-11 02:46:37.115719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.873 [2024-07-11 02:46:37.116000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.873 [2024-07-11 02:46:37.116281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.873 [2024-07-11 02:46:37.116314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.873 [2024-07-11 02:46:37.116332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.873 [2024-07-11 02:46:37.120472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.873 [2024-07-11 02:46:37.129749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.873 [2024-07-11 02:46:37.130258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.873 [2024-07-11 02:46:37.130296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.873 [2024-07-11 02:46:37.130317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.873 [2024-07-11 02:46:37.130602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.873 [2024-07-11 02:46:37.130883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.873 [2024-07-11 02:46:37.130907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.873 [2024-07-11 02:46:37.130926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.873 [2024-07-11 02:46:37.135074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.873 [2024-07-11 02:46:37.144405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.873 [2024-07-11 02:46:37.145012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.873 [2024-07-11 02:46:37.145064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.873 [2024-07-11 02:46:37.145086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.873 [2024-07-11 02:46:37.145368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.873 [2024-07-11 02:46:37.145666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.873 [2024-07-11 02:46:37.145690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.873 [2024-07-11 02:46:37.145708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.873 [2024-07-11 02:46:37.149803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.873 [2024-07-11 02:46:37.158867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.873 [2024-07-11 02:46:37.159401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.873 [2024-07-11 02:46:37.159454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.873 [2024-07-11 02:46:37.159476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.873 [2024-07-11 02:46:37.159766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.873 [2024-07-11 02:46:37.160042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.873 [2024-07-11 02:46:37.160064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.873 [2024-07-11 02:46:37.160082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.873 [2024-07-11 02:46:37.164131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.873 [2024-07-11 02:46:37.173420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.873 [2024-07-11 02:46:37.173971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.873 [2024-07-11 02:46:37.174025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.873 [2024-07-11 02:46:37.174047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.873 [2024-07-11 02:46:37.174325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.873 [2024-07-11 02:46:37.174605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.873 [2024-07-11 02:46:37.174631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.873 [2024-07-11 02:46:37.174649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.873 [2024-07-11 02:46:37.178707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:46.873 [2024-07-11 02:46:37.180472] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:46.873 [2024-07-11 02:46:37.180508] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:46.874 [2024-07-11 02:46:37.180535] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:46.874 [2024-07-11 02:46:37.180549] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:40:46.874 [2024-07-11 02:46:37.180561] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:46.874 [2024-07-11 02:46:37.180650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:40:46.874 [2024-07-11 02:46:37.180804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:40:46.874 [2024-07-11 02:46:37.180838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:46.874 [2024-07-11 02:46:37.188096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.874 [2024-07-11 02:46:37.188635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.874 [2024-07-11 02:46:37.188675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.874 [2024-07-11 02:46:37.188697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.874 [2024-07-11 02:46:37.188977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.874 [2024-07-11 02:46:37.189250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.874 [2024-07-11 02:46:37.189272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.874 [2024-07-11 02:46:37.189290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.874 [2024-07-11 02:46:37.193381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.874 [2024-07-11 02:46:37.202612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.874 [2024-07-11 02:46:37.203148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.874 [2024-07-11 02:46:37.203187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.874 [2024-07-11 02:46:37.203208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.874 [2024-07-11 02:46:37.203488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.874 [2024-07-11 02:46:37.203775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.874 [2024-07-11 02:46:37.203812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.874 [2024-07-11 02:46:37.203832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.874 [2024-07-11 02:46:37.207983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.874 [2024-07-11 02:46:37.217188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.874 [2024-07-11 02:46:37.217733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.874 [2024-07-11 02:46:37.217771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.874 [2024-07-11 02:46:37.217792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.874 [2024-07-11 02:46:37.218073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.874 [2024-07-11 02:46:37.218343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.874 [2024-07-11 02:46:37.218365] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.874 [2024-07-11 02:46:37.218383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.874 [2024-07-11 02:46:37.222521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.874 [2024-07-11 02:46:37.231663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.874 [2024-07-11 02:46:37.232185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.874 [2024-07-11 02:46:37.232222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.874 [2024-07-11 02:46:37.232242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.874 [2024-07-11 02:46:37.232523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.874 [2024-07-11 02:46:37.232801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.874 [2024-07-11 02:46:37.232823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.874 [2024-07-11 02:46:37.232841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.874 [2024-07-11 02:46:37.236909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.874 [2024-07-11 02:46:37.246110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.874 [2024-07-11 02:46:37.246657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.874 [2024-07-11 02:46:37.246696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.874 [2024-07-11 02:46:37.246716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.874 [2024-07-11 02:46:37.246990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.874 [2024-07-11 02:46:37.247261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.874 [2024-07-11 02:46:37.247283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.874 [2024-07-11 02:46:37.247302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.874 [2024-07-11 02:46:37.251375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.874 [2024-07-11 02:46:37.260703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.874 [2024-07-11 02:46:37.261205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.874 [2024-07-11 02:46:37.261243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.874 [2024-07-11 02:46:37.261263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.874 [2024-07-11 02:46:37.261542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.874 [2024-07-11 02:46:37.261811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.874 [2024-07-11 02:46:37.261833] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.874 [2024-07-11 02:46:37.261850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.874 [2024-07-11 02:46:37.265873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.874 [2024-07-11 02:46:37.275124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.874 [2024-07-11 02:46:37.275497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.874 [2024-07-11 02:46:37.275534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.874 [2024-07-11 02:46:37.275552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.874 [2024-07-11 02:46:37.275816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.874 [2024-07-11 02:46:37.276082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.874 [2024-07-11 02:46:37.276104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.874 [2024-07-11 02:46:37.276120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:46.874 [2024-07-11 02:46:37.280142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:46.874 02:46:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:46.874 02:46:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:40:46.874 02:46:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:46.874 02:46:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:40:46.874 02:46:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:46.874 [2024-07-11 02:46:37.289631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:46.874 [2024-07-11 02:46:37.290041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:46.874 [2024-07-11 02:46:37.290072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:46.874 [2024-07-11 02:46:37.290091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:46.874 [2024-07-11 02:46:37.290355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:46.874 [2024-07-11 02:46:37.290632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:46.874 [2024-07-11 02:46:37.290654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:46.874 [2024-07-11 02:46:37.290670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:47.132 [2024-07-11 02:46:37.294687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:47.132 [2024-07-11 02:46:37.303939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:47.132 [2024-07-11 02:46:37.304319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:47.132 [2024-07-11 02:46:37.304349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:47.132 [2024-07-11 02:46:37.304367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:47.132 [2024-07-11 02:46:37.304641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:47.132 [2024-07-11 02:46:37.304907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:47.132 [2024-07-11 02:46:37.304936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:47.132 [2024-07-11 02:46:37.304952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:47.132 [2024-07-11 02:46:37.308971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:47.132 02:46:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:47.132 02:46:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:47.132 02:46:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:47.132 02:46:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:47.132 [2024-07-11 02:46:37.318443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:47.132 [2024-07-11 02:46:37.318829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:47.132 [2024-07-11 02:46:37.318858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:47.132 [2024-07-11 02:46:37.318875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:47.132 [2024-07-11 02:46:37.319138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:47.132 [2024-07-11 02:46:37.319403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:47.132 [2024-07-11 02:46:37.319425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:47.132 [2024-07-11 02:46:37.319441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:47.132 [2024-07-11 02:46:37.322488] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:47.132 [2024-07-11 02:46:37.323474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:47.132 02:46:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:47.132 02:46:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:47.132 02:46:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:47.132 02:46:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:47.132 [2024-07-11 02:46:37.333078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:47.132 [2024-07-11 02:46:37.333486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:47.132 [2024-07-11 02:46:37.333527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:47.132 [2024-07-11 02:46:37.333548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:47.132 [2024-07-11 02:46:37.333821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:47.132 [2024-07-11 02:46:37.334088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:47.132 [2024-07-11 02:46:37.334114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:47.132 [2024-07-11 02:46:37.334132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:47.132 [2024-07-11 02:46:37.338181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:47.132 [2024-07-11 02:46:37.347534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:47.132 [2024-07-11 02:46:37.348082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:47.132 [2024-07-11 02:46:37.348122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:47.132 [2024-07-11 02:46:37.348143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:47.132 [2024-07-11 02:46:37.348416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:47.132 [2024-07-11 02:46:37.348713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:47.132 [2024-07-11 02:46:37.348736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:47.132 [2024-07-11 02:46:37.348754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:47.132 [2024-07-11 02:46:37.352811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:47.132 Malloc0 00:40:47.132 02:46:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:47.132 02:46:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:47.133 02:46:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:47.133 02:46:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:47.133 [2024-07-11 02:46:37.362430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:47.133 [2024-07-11 02:46:37.362921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:47.133 [2024-07-11 02:46:37.362958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:47.133 [2024-07-11 02:46:37.362979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:47.133 [2024-07-11 02:46:37.363248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:47.133 [2024-07-11 02:46:37.363529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:47.133 [2024-07-11 02:46:37.363553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:47.133 [2024-07-11 02:46:37.363570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:40:47.133 02:46:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:47.133 02:46:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:47.133 02:46:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:47.133 02:46:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:47.133 [2024-07-11 02:46:37.367843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:47.133 02:46:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:47.133 02:46:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:47.133 02:46:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:47.133 02:46:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:47.133 [2024-07-11 02:46:37.377164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:47.133 [2024-07-11 02:46:37.377574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:47.133 [2024-07-11 02:46:37.377606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae8a40 with addr=10.0.0.2, port=4420 00:40:47.133 [2024-07-11 02:46:37.377625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8a40 is same with the state(5) to be set 00:40:47.133 [2024-07-11 02:46:37.377700] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:47.133 [2024-07-11 02:46:37.377889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae8a40 (9): Bad file descriptor 00:40:47.133 [2024-07-11 02:46:37.378155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:47.133 [2024-07-11 02:46:37.378177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:47.133 [2024-07-11 02:46:37.378193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:47.133 02:46:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:47.133 02:46:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1977944 00:40:47.133 [2024-07-11 02:46:37.382272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:47.133 [2024-07-11 02:46:37.391662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:47.133 [2024-07-11 02:46:37.516058] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:40:57.098 00:40:57.098 Latency(us) 00:40:57.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:57.098 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:40:57.098 Verification LBA range: start 0x0 length 0x4000 00:40:57.098 Nvme1n1 : 15.01 5689.78 22.23 7571.83 0.00 9622.09 970.90 27962.03 00:40:57.098 =================================================================================================================== 00:40:57.098 Total : 5689.78 22.23 7571.83 0.00 9622.09 970.90 27962.03 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:57.098 rmmod nvme_tcp 00:40:57.098 rmmod nvme_fabrics 00:40:57.098 rmmod nvme_keyring 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1978507 ']' 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1978507 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1978507 ']' 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1978507 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1978507 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:57.098 02:46:46 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1978507' 00:40:57.098 killing process with pid 1978507 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1978507 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1978507 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:57.098 02:46:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:59.004 02:46:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:59.004 00:40:59.004 real 0m21.697s 00:40:59.004 user 0m58.694s 00:40:59.004 sys 0m3.977s 00:40:59.004 02:46:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:59.005 02:46:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:59.005 ************************************ 00:40:59.005 END TEST nvmf_bdevperf 00:40:59.005 ************************************ 00:40:59.005 02:46:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:40:59.005 02:46:48 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:40:59.005 02:46:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # 
'[' 3 -le 1 ']' 00:40:59.005 02:46:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:59.005 02:46:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:59.005 ************************************ 00:40:59.005 START TEST nvmf_target_disconnect 00:40:59.005 ************************************ 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:40:59.005 * Looking for test storage... 00:40:59.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect 
-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:40:59.005 02:46:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:41:00.382 
02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:41:00.382 02:46:50 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:41:00.382 Found 0000:08:00.0 (0x8086 - 0x159b) 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:41:00.382 Found 0000:08:00.1 (0x8086 - 0x159b) 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:41:00.382 Found net devices under 0000:08:00.0: cvl_0_0 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:41:00.382 Found net devices under 0000:08:00.1: cvl_0_1 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 
-- # net_devs+=("${pci_net_devs[@]}") 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:41:00.382 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:00.382 
02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:41:00.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:00.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:41:00.383 00:41:00.383 --- 10.0.0.2 ping statistics --- 00:41:00.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:00.383 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:00.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:00.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:41:00.383 00:41:00.383 --- 10.0.0.1 ping statistics --- 00:41:00.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:00.383 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:41:00.383 ************************************ 00:41:00.383 START TEST nvmf_target_disconnect_tc1 00:41:00.383 ************************************ 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:41:00.383 
02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:41:00.383 02:46:50 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:41:00.383 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:00.642 EAL: No free 2048 kB hugepages reported on node 1 00:41:00.642 [2024-07-11 02:46:50.875568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:00.642 [2024-07-11 02:46:50.875684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1abd0c0 with addr=10.0.0.2, port=4420 00:41:00.642 [2024-07-11 02:46:50.875718] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:41:00.642 [2024-07-11 02:46:50.875744] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:41:00.642 [2024-07-11 02:46:50.875759] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:41:00.642 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:41:00.642 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:41:00.642 Initializing NVMe Controllers 00:41:00.642 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:41:00.642 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:00.642 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:41:00.642 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:00.642 00:41:00.642 real 0m0.099s 00:41:00.642 user 0m0.036s 00:41:00.642 sys 0m0.061s 
00:41:00.642 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:41:00.642 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x
00:41:00.642 ************************************
00:41:00.642 END TEST nvmf_target_disconnect_tc1
00:41:00.642 ************************************
00:41:00.642 02:46:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0
00:41:00.642 02:46:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:41:00.642 02:46:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:41:00.642 02:46:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable
00:41:00.642 02:46:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:41:00.642 ************************************
00:41:00.642 START TEST nvmf_target_disconnect_tc2
00:41:00.642 ************************************
00:41:00.642 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2
00:41:00.642 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2
00:41:00.642 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:41:00.642 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:41:00.642 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:41:00.642 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:41:00.642 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1980927
00:41:00.642 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:41:00.642 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1980927
00:41:00.642 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1980927 ']'
00:41:00.642 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:41:00.642 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:41:00.642 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:41:00.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:41:00.642 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:41:00.642 02:46:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:41:00.642 [2024-07-11 02:46:50.997446] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:41:00.642 [2024-07-11 02:46:50.997542] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:41:00.642 EAL: No free 2048 kB hugepages reported on node 1
00:41:00.642 [2024-07-11 02:46:51.062250] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:41:00.900 [2024-07-11 02:46:51.151266] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:41:00.900 [2024-07-11 02:46:51.151326] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:41:00.900 [2024-07-11 02:46:51.151343] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:41:00.900 [2024-07-11 02:46:51.151360] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:41:00.900 [2024-07-11 02:46:51.151373] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:41:00.900 [2024-07-11 02:46:51.151475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:41:00.900 [2024-07-11 02:46:51.151547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:41:00.900 [2024-07-11 02:46:51.151626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:41:00.900 [2024-07-11 02:46:51.151630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:41:00.900 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:41:00.900 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:41:00.900 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:41:00.900 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:41:00.900 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:41:00.900 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:41:00.900 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:41:00.900 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:41:00.900 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:41:00.900 Malloc0
00:41:00.900 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:41:00.900 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:41:00.900 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:41:00.900 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:41:00.900 [2024-07-11 02:46:51.320912] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:41:01.157 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:41:01.157 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:41:01.157 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:41:01.157 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:41:01.157 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:41:01.157 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:41:01.157 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:41:01.157 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:41:01.157 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:41:01.157 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:41:01.157 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:41:01.157 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:41:01.157 [2024-07-11 02:46:51.349124] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:41:01.157 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:41:01.157 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:41:01.157 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:41:01.157 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:41:01.157 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:41:01.157 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1980965
00:41:01.157 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:41:01.157 02:46:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:41:01.157 EAL: No free 2048 kB hugepages reported on node 1
00:41:03.067 02:46:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1980927
00:41:03.067 02:46:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:41:03.067 Read completed with error (sct=0, sc=8)
00:41:03.067 starting I/O failed
00:41:03.067 Read completed with error
(sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, 
sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 [2024-07-11 02:46:53.376060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 
00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 [2024-07-11 02:46:53.376433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 
starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Write completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.067 starting I/O failed 00:41:03.067 Read completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Read completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Write completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Read completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Read completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Read completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Write completed with error (sct=0, sc=8) 00:41:03.068 starting I/O 
failed 00:41:03.068 Write completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Write completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Read completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Write completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Read completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Read completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 [2024-07-11 02:46:53.376801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:03.068 Read completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Read completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Read completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Read completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Read completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Read completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Read completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Write completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Read completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Write completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Write completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Write completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Read completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Read completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Write completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 
00:41:03.068 Read completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Write completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Read completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Read completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Read completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Write completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Write completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Write completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Write completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Write completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Write completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Write completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Read completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Write completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Write completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Read completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 Read completed with error (sct=0, sc=8) 00:41:03.068 starting I/O failed 00:41:03.068 [2024-07-11 02:46:53.377132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:41:03.068 [2024-07-11 02:46:53.377270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.068 [2024-07-11 02:46:53.377303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.068 qpair failed and we were unable to recover it. 
00:41:03.068 [2024-07-11 02:46:53.377403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.068 [2024-07-11 02:46:53.377431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.068 qpair failed and we were unable to recover it. 00:41:03.068 [2024-07-11 02:46:53.377528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.068 [2024-07-11 02:46:53.377556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.068 qpair failed and we were unable to recover it. 00:41:03.068 [2024-07-11 02:46:53.377657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.068 [2024-07-11 02:46:53.377685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.068 qpair failed and we were unable to recover it. 00:41:03.068 [2024-07-11 02:46:53.377844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.068 [2024-07-11 02:46:53.377900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.068 qpair failed and we were unable to recover it. 00:41:03.068 [2024-07-11 02:46:53.378035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.068 [2024-07-11 02:46:53.378068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.068 qpair failed and we were unable to recover it. 
00:41:03.068 [2024-07-11 02:46:53.378175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.068 [2024-07-11 02:46:53.378202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.068 qpair failed and we were unable to recover it. 00:41:03.068 [2024-07-11 02:46:53.378326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.068 [2024-07-11 02:46:53.378353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.068 qpair failed and we were unable to recover it. 00:41:03.068 [2024-07-11 02:46:53.378469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.068 [2024-07-11 02:46:53.378497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.068 qpair failed and we were unable to recover it. 00:41:03.068 [2024-07-11 02:46:53.378660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.068 [2024-07-11 02:46:53.378687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.068 qpair failed and we were unable to recover it. 00:41:03.068 [2024-07-11 02:46:53.378810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.068 [2024-07-11 02:46:53.378839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.068 qpair failed and we were unable to recover it. 
00:41:03.068 [2024-07-11 02:46:53.379052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.068 [2024-07-11 02:46:53.379100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.068 qpair failed and we were unable to recover it. 00:41:03.068 [2024-07-11 02:46:53.379256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.068 [2024-07-11 02:46:53.379307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.068 qpair failed and we were unable to recover it. 00:41:03.068 [2024-07-11 02:46:53.379496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.068 [2024-07-11 02:46:53.379538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.068 qpair failed and we were unable to recover it. 00:41:03.068 [2024-07-11 02:46:53.379731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.068 [2024-07-11 02:46:53.379759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.068 qpair failed and we were unable to recover it. 00:41:03.068 [2024-07-11 02:46:53.379935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.068 [2024-07-11 02:46:53.379991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.068 qpair failed and we were unable to recover it. 
00:41:03.068 [2024-07-11 02:46:53.380185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.068 [2024-07-11 02:46:53.380212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.068 qpair failed and we were unable to recover it. 00:41:03.068 [2024-07-11 02:46:53.380416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.068 [2024-07-11 02:46:53.380443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.068 qpair failed and we were unable to recover it. 00:41:03.068 [2024-07-11 02:46:53.380574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.068 [2024-07-11 02:46:53.380634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.068 qpair failed and we were unable to recover it. 00:41:03.068 [2024-07-11 02:46:53.380756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.068 [2024-07-11 02:46:53.380785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.068 qpair failed and we were unable to recover it. 00:41:03.068 [2024-07-11 02:46:53.381016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.068 [2024-07-11 02:46:53.381065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.068 qpair failed and we were unable to recover it. 
00:41:03.068 [2024-07-11 02:46:53.381258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.068 [2024-07-11 02:46:53.381313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.068 qpair failed and we were unable to recover it. 00:41:03.068 [2024-07-11 02:46:53.381479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.068 [2024-07-11 02:46:53.381545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.068 qpair failed and we were unable to recover it. 00:41:03.068 [2024-07-11 02:46:53.381709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.068 [2024-07-11 02:46:53.381769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.068 qpair failed and we were unable to recover it. 00:41:03.068 [2024-07-11 02:46:53.381955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.068 [2024-07-11 02:46:53.382015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.069 qpair failed and we were unable to recover it. 00:41:03.069 [2024-07-11 02:46:53.382229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.069 [2024-07-11 02:46:53.382256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.069 qpair failed and we were unable to recover it. 
00:41:03.069 [2024-07-11 02:46:53.382422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.069 [2024-07-11 02:46:53.382471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.069 qpair failed and we were unable to recover it. 00:41:03.069 [2024-07-11 02:46:53.382651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.069 [2024-07-11 02:46:53.382703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.069 qpair failed and we were unable to recover it. 00:41:03.069 [2024-07-11 02:46:53.382895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.069 [2024-07-11 02:46:53.382945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.069 qpair failed and we were unable to recover it. 00:41:03.069 [2024-07-11 02:46:53.383044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.069 [2024-07-11 02:46:53.383072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.069 qpair failed and we were unable to recover it. 00:41:03.069 [2024-07-11 02:46:53.383171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.069 [2024-07-11 02:46:53.383198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.069 qpair failed and we were unable to recover it. 
00:41:03.069 [2024-07-11 02:46:53.383327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.069 [2024-07-11 02:46:53.383354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.069 qpair failed and we were unable to recover it. 00:41:03.069 [2024-07-11 02:46:53.383491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.069 [2024-07-11 02:46:53.383528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.069 qpair failed and we were unable to recover it. 00:41:03.069 [2024-07-11 02:46:53.383643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.069 [2024-07-11 02:46:53.383669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.069 qpair failed and we were unable to recover it. 00:41:03.069 [2024-07-11 02:46:53.383782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.069 [2024-07-11 02:46:53.383808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.069 qpair failed and we were unable to recover it. 00:41:03.069 [2024-07-11 02:46:53.383926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.069 [2024-07-11 02:46:53.383952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.069 qpair failed and we were unable to recover it. 
00:41:03.069 [2024-07-11 02:46:53.384056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.384083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.384209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.384252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.384373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.384401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.384518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.384550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.384646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.384673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.384776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.384803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.384929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.384969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.385074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.385116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.385240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.385297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.385393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.385421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.385532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.385576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.385682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.385710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.385867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.385917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.386017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.386045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.386226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.386253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.386391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.386442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.386544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.386573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.386703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.386734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.386857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.386884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.386981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.387009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.387167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.387216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.387388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.387415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.387546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.387574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.387699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.387729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.388446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.388474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.388619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.388655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.388754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.388781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.388879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.388907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.389011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.069 [2024-07-11 02:46:53.389039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.069 qpair failed and we were unable to recover it.
00:41:03.069 [2024-07-11 02:46:53.389166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.389193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.389297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.389323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.389416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.389444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.389540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.389567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.389678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.389720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.389819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.389845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.389931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.389959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.390069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.390095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.390186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.390213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.390304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.390330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.390440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.390466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.390588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.390616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.390702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.390729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.390870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.390912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.391067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.391118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.391258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.391299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.391388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.391415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.391527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.391567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.391735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.391766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.391895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.391951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.392053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.392082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.392237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.392289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.392417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.392468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.392574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.392604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.392779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.392807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.392927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.392955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.393145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.393174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.393279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.393306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.393412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.393454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.393588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.393630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.393719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.393746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.393917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.393971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.394064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.394093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.394234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.394286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.394415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.394473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.394655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.394698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.394826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.394876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.394976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.395004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.395111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.395152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.395239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.395266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.070 [2024-07-11 02:46:53.395381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.070 [2024-07-11 02:46:53.395444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.070 qpair failed and we were unable to recover it.
00:41:03.071 [2024-07-11 02:46:53.395562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.071 [2024-07-11 02:46:53.395592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.071 qpair failed and we were unable to recover it.
00:41:03.071 [2024-07-11 02:46:53.395688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.071 [2024-07-11 02:46:53.395716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.071 qpair failed and we were unable to recover it.
00:41:03.071 [2024-07-11 02:46:53.395845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.071 [2024-07-11 02:46:53.395891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.071 qpair failed and we were unable to recover it.
00:41:03.071 [2024-07-11 02:46:53.395975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.071 [2024-07-11 02:46:53.396002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.071 qpair failed and we were unable to recover it.
00:41:03.071 [2024-07-11 02:46:53.396107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.071 [2024-07-11 02:46:53.396134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.071 qpair failed and we were unable to recover it.
00:41:03.071 [2024-07-11 02:46:53.396231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.071 [2024-07-11 02:46:53.396260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.071 qpair failed and we were unable to recover it.
00:41:03.071 [2024-07-11 02:46:53.396362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.071 [2024-07-11 02:46:53.396391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.071 qpair failed and we were unable to recover it.
00:41:03.071 [2024-07-11 02:46:53.396487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.071 [2024-07-11 02:46:53.396524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.071 qpair failed and we were unable to recover it.
00:41:03.071 [2024-07-11 02:46:53.396634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.071 [2024-07-11 02:46:53.396660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.071 qpair failed and we were unable to recover it.
00:41:03.071 [2024-07-11 02:46:53.396778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.071 [2024-07-11 02:46:53.396805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.071 qpair failed and we were unable to recover it.
00:41:03.071 [2024-07-11 02:46:53.396939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.071 [2024-07-11 02:46:53.396966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.071 qpair failed and we were unable to recover it.
00:41:03.071 [2024-07-11 02:46:53.397057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.071 [2024-07-11 02:46:53.397085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.071 qpair failed and we were unable to recover it.
00:41:03.071 [2024-07-11 02:46:53.397242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.071 [2024-07-11 02:46:53.397270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.071 qpair failed and we were unable to recover it.
00:41:03.071 [2024-07-11 02:46:53.397388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.071 [2024-07-11 02:46:53.397448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.071 qpair failed and we were unable to recover it.
00:41:03.071 [2024-07-11 02:46:53.397541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.071 [2024-07-11 02:46:53.397569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.071 qpair failed and we were unable to recover it.
00:41:03.071 [2024-07-11 02:46:53.397670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.071 [2024-07-11 02:46:53.397697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.071 qpair failed and we were unable to recover it.
00:41:03.071 [2024-07-11 02:46:53.397780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.071 [2024-07-11 02:46:53.397807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.071 qpair failed and we were unable to recover it.
00:41:03.071 [2024-07-11 02:46:53.397930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.071 [2024-07-11 02:46:53.397972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.071 qpair failed and we were unable to recover it.
00:41:03.071 [2024-07-11 02:46:53.398066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.071 [2024-07-11 02:46:53.398095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.071 qpair failed and we were unable to recover it.
00:41:03.071 [2024-07-11 02:46:53.398215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.071 [2024-07-11 02:46:53.398254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.071 qpair failed and we were unable to recover it.
00:41:03.071 [2024-07-11 02:46:53.398349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.071 [2024-07-11 02:46:53.398378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.071 qpair failed and we were unable to recover it.
00:41:03.071 [2024-07-11 02:46:53.398490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.071 [2024-07-11 02:46:53.398564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.071 qpair failed and we were unable to recover it.
00:41:03.071 [2024-07-11 02:46:53.398692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.071 [2024-07-11 02:46:53.398736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.071 qpair failed and we were unable to recover it.
00:41:03.071 [2024-07-11 02:46:53.398847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.071 [2024-07-11 02:46:53.398889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.071 qpair failed and we were unable to recover it.
00:41:03.071 [2024-07-11 02:46:53.398995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.071 [2024-07-11 02:46:53.399022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.071 qpair failed and we were unable to recover it.
00:41:03.071 [2024-07-11 02:46:53.399152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.071 [2024-07-11 02:46:53.399193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.071 qpair failed and we were unable to recover it. 00:41:03.071 [2024-07-11 02:46:53.399310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.071 [2024-07-11 02:46:53.399340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.071 qpair failed and we were unable to recover it. 00:41:03.071 [2024-07-11 02:46:53.399440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.071 [2024-07-11 02:46:53.399469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.071 qpair failed and we were unable to recover it. 00:41:03.071 [2024-07-11 02:46:53.399703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.071 [2024-07-11 02:46:53.399746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.071 qpair failed and we were unable to recover it. 00:41:03.071 [2024-07-11 02:46:53.399841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.071 [2024-07-11 02:46:53.399871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.071 qpair failed and we were unable to recover it. 
00:41:03.071 [2024-07-11 02:46:53.399960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.071 [2024-07-11 02:46:53.399988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.071 qpair failed and we were unable to recover it. 00:41:03.071 [2024-07-11 02:46:53.400100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.071 [2024-07-11 02:46:53.400141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.071 qpair failed and we were unable to recover it. 00:41:03.071 [2024-07-11 02:46:53.400259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.071 [2024-07-11 02:46:53.400317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.071 qpair failed and we were unable to recover it. 00:41:03.071 [2024-07-11 02:46:53.400413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.071 [2024-07-11 02:46:53.400441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.071 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.400531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.400558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 
00:41:03.072 [2024-07-11 02:46:53.400670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.400701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.400809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.400836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.400935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.400962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.401064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.401090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.401181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.401210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 
00:41:03.072 [2024-07-11 02:46:53.401406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.401459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.401575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.401632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.401734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.401764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.401906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.401961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.402083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.402128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 
00:41:03.072 [2024-07-11 02:46:53.402250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.402292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.402394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.402423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.402526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.402555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.402638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.402665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.402770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.402796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 
00:41:03.072 [2024-07-11 02:46:53.402914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.402957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.403057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.403087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.403182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.403208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.403309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.403335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.403444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.403485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 
00:41:03.072 [2024-07-11 02:46:53.403678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.403735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.403912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.403942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.404039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.404066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.404189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.404244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.404334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.404362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 
00:41:03.072 [2024-07-11 02:46:53.404474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.404523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.404637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.404664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.404788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.404834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.404978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.405030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.405168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.405213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 
00:41:03.072 [2024-07-11 02:46:53.405337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.405399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.405533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.405575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.405687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.405716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.405834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.405875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.405965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.405992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 
00:41:03.072 [2024-07-11 02:46:53.406076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.406103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.406216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.406246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.406364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.406426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.406519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.406546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 00:41:03.072 [2024-07-11 02:46:53.406644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.072 [2024-07-11 02:46:53.406673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.072 qpair failed and we were unable to recover it. 
00:41:03.073 [2024-07-11 02:46:53.406796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.406837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.406926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.406953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.407078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.407133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.407248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.407289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.407410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.407467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 
00:41:03.073 [2024-07-11 02:46:53.407585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.407613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.407728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.407787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.407876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.407903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.408052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.408105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.408203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.408231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 
00:41:03.073 [2024-07-11 02:46:53.408355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.408396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.408525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.408581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.408759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.408788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.408898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.408939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.409057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.409093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 
00:41:03.073 [2024-07-11 02:46:53.409198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.409227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.409333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.409362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.409484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.409548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.409672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.409699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.409871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.409899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 
00:41:03.073 [2024-07-11 02:46:53.409984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.410010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.410125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.410153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.410244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.410272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.410367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.410394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.410485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.410526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 
00:41:03.073 [2024-07-11 02:46:53.410620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.410646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.410755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.410785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.410908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.410935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.411109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.411138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.411228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.411255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 
00:41:03.073 [2024-07-11 02:46:53.411383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.411436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.411524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.411551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.411653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.411717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.411834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.411890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 00:41:03.073 [2024-07-11 02:46:53.411978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.412005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it. 
00:41:03.073 [2024-07-11 02:46:53.412107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.073 [2024-07-11 02:46:53.412148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.073 qpair failed and we were unable to recover it.
[... the same error pair — posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED), followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error and "qpair failed and we were unable to recover it." — repeats continuously from 02:46:53.412 through 02:46:53.430, cycling over tqpair=0x7f332c000b90, 0x2266180, 0x7f333c000b90, and 0x7f3334000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:41:03.076 [2024-07-11 02:46:53.430210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.076 [2024-07-11 02:46:53.430268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.076 qpair failed and we were unable to recover it. 00:41:03.076 [2024-07-11 02:46:53.430388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.076 [2024-07-11 02:46:53.430417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.076 qpair failed and we were unable to recover it. 00:41:03.076 [2024-07-11 02:46:53.430534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.076 [2024-07-11 02:46:53.430561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.076 qpair failed and we were unable to recover it. 00:41:03.076 [2024-07-11 02:46:53.430681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.076 [2024-07-11 02:46:53.430707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.076 qpair failed and we were unable to recover it. 00:41:03.076 [2024-07-11 02:46:53.430875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.076 [2024-07-11 02:46:53.430929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.076 qpair failed and we were unable to recover it. 
00:41:03.076 [2024-07-11 02:46:53.431031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.076 [2024-07-11 02:46:53.431072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.076 qpair failed and we were unable to recover it. 00:41:03.076 [2024-07-11 02:46:53.431199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.076 [2024-07-11 02:46:53.431259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.076 qpair failed and we were unable to recover it. 00:41:03.076 [2024-07-11 02:46:53.431352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.076 [2024-07-11 02:46:53.431379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.076 qpair failed and we were unable to recover it. 00:41:03.076 [2024-07-11 02:46:53.431495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.431560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.431646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.431673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 
00:41:03.077 [2024-07-11 02:46:53.431773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.431852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.431980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.432030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.432135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.432175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.432313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.432367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.432458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.432487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 
00:41:03.077 [2024-07-11 02:46:53.432667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.432693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.432820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.432875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.432999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.433060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.433180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.433238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.433353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.433406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 
00:41:03.077 [2024-07-11 02:46:53.433536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.433587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.433763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.433792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.433895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.433922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.434007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.434036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.434145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.434206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 
00:41:03.077 [2024-07-11 02:46:53.434296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.434323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.434443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.434485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.434614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.434693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.434819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.434874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.434994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.435046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 
00:41:03.077 [2024-07-11 02:46:53.435136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.435162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.435283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.435339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.435421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.435448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.435548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.435575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.435686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.435742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 
00:41:03.077 [2024-07-11 02:46:53.435850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.435902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.435989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.436016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.436145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.436217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.436323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.436352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.436636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.436679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 
00:41:03.077 [2024-07-11 02:46:53.436783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.436812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.436906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.436934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.437024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.437051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.437187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.437227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.437310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.437337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 
00:41:03.077 [2024-07-11 02:46:53.437454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.437484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.437635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.437693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.437781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.437809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.077 [2024-07-11 02:46:53.437912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.077 [2024-07-11 02:46:53.437939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.077 qpair failed and we were unable to recover it. 00:41:03.078 [2024-07-11 02:46:53.438080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.438137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 
00:41:03.078 [2024-07-11 02:46:53.438255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.438308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 00:41:03.078 [2024-07-11 02:46:53.438422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.438483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 00:41:03.078 [2024-07-11 02:46:53.438582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.438611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 00:41:03.078 [2024-07-11 02:46:53.438789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.438846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 00:41:03.078 [2024-07-11 02:46:53.438941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.438970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 
00:41:03.078 [2024-07-11 02:46:53.439085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.439143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 00:41:03.078 [2024-07-11 02:46:53.439231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.439257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 00:41:03.078 [2024-07-11 02:46:53.439344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.439371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 00:41:03.078 [2024-07-11 02:46:53.439521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.439571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 00:41:03.078 [2024-07-11 02:46:53.439741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.439767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 
00:41:03.078 [2024-07-11 02:46:53.439886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.439956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 00:41:03.078 [2024-07-11 02:46:53.440054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.440080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 00:41:03.078 [2024-07-11 02:46:53.440190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.440231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 00:41:03.078 [2024-07-11 02:46:53.440327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.440356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 00:41:03.078 [2024-07-11 02:46:53.440442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.440469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 
00:41:03.078 [2024-07-11 02:46:53.440604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.440659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 00:41:03.078 [2024-07-11 02:46:53.440774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.440829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 00:41:03.078 [2024-07-11 02:46:53.440935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.440995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 00:41:03.078 [2024-07-11 02:46:53.441080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.441107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 00:41:03.078 [2024-07-11 02:46:53.441206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.441234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 
00:41:03.078 [2024-07-11 02:46:53.441345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.441401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 00:41:03.078 [2024-07-11 02:46:53.441490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.441532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 00:41:03.078 [2024-07-11 02:46:53.441616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.441642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 00:41:03.078 [2024-07-11 02:46:53.441776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.441817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 00:41:03.078 [2024-07-11 02:46:53.441932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.441993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 
00:41:03.078 [2024-07-11 02:46:53.442104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.442161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 00:41:03.078 [2024-07-11 02:46:53.442245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.442271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 00:41:03.078 [2024-07-11 02:46:53.442387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.442429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 00:41:03.078 [2024-07-11 02:46:53.442528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.442555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 00:41:03.078 [2024-07-11 02:46:53.442686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.078 [2024-07-11 02:46:53.442733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.078 qpair failed and we were unable to recover it. 
00:41:03.078 [... 110 further repetitions of the same failure pattern elided (console timestamps 00:41:03.078 through 00:41:03.081, log timestamps 02:46:53.442817 through 02:46:53.459313): posix.c:1038:posix_sock_create connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error for addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it."; tqpair values seen: 0x2266180, 0x7f333c000b90, 0x7f332c000b90, 0x7f3334000b90 ...]
00:41:03.081 [2024-07-11 02:46:53.459472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.081 [2024-07-11 02:46:53.459501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.081 qpair failed and we were unable to recover it. 00:41:03.081 [2024-07-11 02:46:53.459649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.081 [2024-07-11 02:46:53.459707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.081 qpair failed and we were unable to recover it. 00:41:03.081 [2024-07-11 02:46:53.459827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.081 [2024-07-11 02:46:53.459853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.081 qpair failed and we were unable to recover it. 00:41:03.081 [2024-07-11 02:46:53.459936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.081 [2024-07-11 02:46:53.459962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.081 qpair failed and we were unable to recover it. 00:41:03.081 [2024-07-11 02:46:53.460083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.081 [2024-07-11 02:46:53.460134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.081 qpair failed and we were unable to recover it. 
00:41:03.081 [2024-07-11 02:46:53.460239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.081 [2024-07-11 02:46:53.460267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.081 qpair failed and we were unable to recover it. 00:41:03.081 [2024-07-11 02:46:53.460402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.081 [2024-07-11 02:46:53.460450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.081 qpair failed and we were unable to recover it. 00:41:03.081 [2024-07-11 02:46:53.460550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.081 [2024-07-11 02:46:53.460610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.081 qpair failed and we were unable to recover it. 00:41:03.081 [2024-07-11 02:46:53.460770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.081 [2024-07-11 02:46:53.460826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.081 qpair failed and we were unable to recover it. 00:41:03.081 [2024-07-11 02:46:53.460980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.081 [2024-07-11 02:46:53.461040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.081 qpair failed and we were unable to recover it. 
00:41:03.081 [2024-07-11 02:46:53.461129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.081 [2024-07-11 02:46:53.461161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.081 qpair failed and we were unable to recover it. 00:41:03.081 [2024-07-11 02:46:53.461265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.081 [2024-07-11 02:46:53.461292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.081 qpair failed and we were unable to recover it. 00:41:03.081 [2024-07-11 02:46:53.461392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.081 [2024-07-11 02:46:53.461456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.081 qpair failed and we were unable to recover it. 00:41:03.081 [2024-07-11 02:46:53.461619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.081 [2024-07-11 02:46:53.461687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.081 qpair failed and we were unable to recover it. 00:41:03.081 [2024-07-11 02:46:53.461842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.081 [2024-07-11 02:46:53.461909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.081 qpair failed and we were unable to recover it. 
00:41:03.081 [2024-07-11 02:46:53.462006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.081 [2024-07-11 02:46:53.462035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.081 qpair failed and we were unable to recover it. 00:41:03.081 [2024-07-11 02:46:53.462174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.462233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.462345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.462401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.462483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.462519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.462658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.462716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 
00:41:03.082 [2024-07-11 02:46:53.462809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.462836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.462959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.463012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.463097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.463123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.463216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.463243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.463375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.463430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 
00:41:03.082 [2024-07-11 02:46:53.463537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.463567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.463725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.463752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.463842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.463869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.463954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.463981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.464070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.464098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 
00:41:03.082 [2024-07-11 02:46:53.464199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.464226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.464316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.464344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.464442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.464469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.464591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.464641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.464758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.464818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 
00:41:03.082 [2024-07-11 02:46:53.464903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.464930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.465064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.465117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.465207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.465234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.465316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.465342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.465423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.465449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 
00:41:03.082 [2024-07-11 02:46:53.465542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.465569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.465718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.465745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.465830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.465857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.465942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.465970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.466077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.466105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 
00:41:03.082 [2024-07-11 02:46:53.466209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.466236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.466323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.466350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.466450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.466477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.466596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.466623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.466748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.466775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 
00:41:03.082 [2024-07-11 02:46:53.466860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.466892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.466999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.467059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.467183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.467269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.467386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.467437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.467530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.467558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 
00:41:03.082 [2024-07-11 02:46:53.467661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.467687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.467827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.467884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.082 [2024-07-11 02:46:53.468011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.082 [2024-07-11 02:46:53.468066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.082 qpair failed and we were unable to recover it. 00:41:03.083 [2024-07-11 02:46:53.468199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.083 [2024-07-11 02:46:53.468255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.083 qpair failed and we were unable to recover it. 00:41:03.083 [2024-07-11 02:46:53.468412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.083 [2024-07-11 02:46:53.468479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.083 qpair failed and we were unable to recover it. 
00:41:03.083 [2024-07-11 02:46:53.468595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.083 [2024-07-11 02:46:53.468660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.083 qpair failed and we were unable to recover it. 00:41:03.083 [2024-07-11 02:46:53.468805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.083 [2024-07-11 02:46:53.468859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.083 qpair failed and we were unable to recover it. 00:41:03.083 [2024-07-11 02:46:53.468950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.083 [2024-07-11 02:46:53.468977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.083 qpair failed and we were unable to recover it. 00:41:03.083 [2024-07-11 02:46:53.469098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.083 [2024-07-11 02:46:53.469152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.083 qpair failed and we were unable to recover it. 00:41:03.083 [2024-07-11 02:46:53.469243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.083 [2024-07-11 02:46:53.469270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.083 qpair failed and we were unable to recover it. 
00:41:03.083 [2024-07-11 02:46:53.469387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.083 [2024-07-11 02:46:53.469443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.083 qpair failed and we were unable to recover it. 00:41:03.083 [2024-07-11 02:46:53.469541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.083 [2024-07-11 02:46:53.469569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.083 qpair failed and we were unable to recover it. 00:41:03.083 [2024-07-11 02:46:53.469706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.083 [2024-07-11 02:46:53.469762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.083 qpair failed and we were unable to recover it. 00:41:03.083 [2024-07-11 02:46:53.469876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.083 [2024-07-11 02:46:53.469925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.083 qpair failed and we were unable to recover it. 00:41:03.083 [2024-07-11 02:46:53.470015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.083 [2024-07-11 02:46:53.470041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.083 qpair failed and we were unable to recover it. 
00:41:03.083 [2024-07-11 02:46:53.470129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.083 [2024-07-11 02:46:53.470157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.083 qpair failed and we were unable to recover it. 00:41:03.083 [2024-07-11 02:46:53.470293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.083 [2024-07-11 02:46:53.470349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.083 qpair failed and we were unable to recover it. 00:41:03.083 [2024-07-11 02:46:53.470442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.083 [2024-07-11 02:46:53.470469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.083 qpair failed and we were unable to recover it. 00:41:03.083 [2024-07-11 02:46:53.470616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.083 [2024-07-11 02:46:53.470643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.083 qpair failed and we were unable to recover it. 00:41:03.083 [2024-07-11 02:46:53.470727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.083 [2024-07-11 02:46:53.470754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.083 qpair failed and we were unable to recover it. 
00:41:03.083 [2024-07-11 02:46:53.470873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.083 [2024-07-11 02:46:53.470928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.083 qpair failed and we were unable to recover it.
00:41:03.083 [2024-07-11 02:46:53.471029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.083 [2024-07-11 02:46:53.471081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.083 qpair failed and we were unable to recover it.
[... the same three-line sequence ("connect() failed, errno = 111" from posix.c:1038:posix_sock_create, "sock connection error of tqpair=... with addr=10.0.0.2, port=4420" from nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it.") repeats well over a hundred more times between 02:46:53.470 and 02:46:53.487, cycling through tqpair=0x2266180, 0x7f332c000b90, 0x7f333c000b90, and 0x7f3334000b90 ...]
00:41:03.360 [2024-07-11 02:46:53.487622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.487671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 00:41:03.360 [2024-07-11 02:46:53.487794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.487847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 00:41:03.360 [2024-07-11 02:46:53.487963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.488010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 00:41:03.360 [2024-07-11 02:46:53.488121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.488175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 00:41:03.360 [2024-07-11 02:46:53.488315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.488368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 
00:41:03.360 [2024-07-11 02:46:53.488471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.488547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 00:41:03.360 [2024-07-11 02:46:53.488643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.488671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 00:41:03.360 [2024-07-11 02:46:53.488785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.488839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 00:41:03.360 [2024-07-11 02:46:53.488927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.488958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 00:41:03.360 [2024-07-11 02:46:53.489044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.489070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 
00:41:03.360 [2024-07-11 02:46:53.489154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.489180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 00:41:03.360 [2024-07-11 02:46:53.489270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.489297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 00:41:03.360 [2024-07-11 02:46:53.489380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.489407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 00:41:03.360 [2024-07-11 02:46:53.489519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.489561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 00:41:03.360 [2024-07-11 02:46:53.489681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.489738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 
00:41:03.360 [2024-07-11 02:46:53.489878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.489933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 00:41:03.360 [2024-07-11 02:46:53.490018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.490045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 00:41:03.360 [2024-07-11 02:46:53.490132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.490159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 00:41:03.360 [2024-07-11 02:46:53.490272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.490321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 00:41:03.360 [2024-07-11 02:46:53.490414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.490443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 
00:41:03.360 [2024-07-11 02:46:53.490587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.490646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 00:41:03.360 [2024-07-11 02:46:53.490766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.490826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 00:41:03.360 [2024-07-11 02:46:53.490927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.490955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 00:41:03.360 [2024-07-11 02:46:53.491047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.491073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 00:41:03.360 [2024-07-11 02:46:53.491166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.491194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 
00:41:03.360 [2024-07-11 02:46:53.491279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.491306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 00:41:03.360 [2024-07-11 02:46:53.491401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.360 [2024-07-11 02:46:53.491429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.360 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.491517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.491544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.491628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.491654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.491737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.491765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 
00:41:03.361 [2024-07-11 02:46:53.491872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.491913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.492002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.492030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.492122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.492149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.492249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.492309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.492420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.492478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 
00:41:03.361 [2024-07-11 02:46:53.492582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.492638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.492874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.492903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.492989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.493016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.493116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.493144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.493256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.493305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 
00:41:03.361 [2024-07-11 02:46:53.493418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.493475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.493564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.493592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.493680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.493707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.493789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.493816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.493948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.494003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 
00:41:03.361 [2024-07-11 02:46:53.494107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.494174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.494262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.494290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.494374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.494401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.494523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.494586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.494677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.494705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 
00:41:03.361 [2024-07-11 02:46:53.494830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.494875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.494962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.494990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.495077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.495104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.495192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.495221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.495307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.495333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 
00:41:03.361 [2024-07-11 02:46:53.495416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.495443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.495536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.495563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.495682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.495731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.495832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.495893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.495985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.496012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 
00:41:03.361 [2024-07-11 02:46:53.496124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.496170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.496257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.496283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.496378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.496406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.496494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.496531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.496635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.496675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 
00:41:03.361 [2024-07-11 02:46:53.496778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.496806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.496891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.496918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.497010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.497038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.497127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.497154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.497237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.497263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 
00:41:03.361 [2024-07-11 02:46:53.497375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.497423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.497521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.497548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.497659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.497709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.497810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.497862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.497971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.498031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 
00:41:03.361 [2024-07-11 02:46:53.498120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.498152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.498268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.498325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.498412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.498439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.498521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.498548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.498651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.498700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 
00:41:03.361 [2024-07-11 02:46:53.498787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.498814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.499023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.499052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.499139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.499166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.499247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.361 [2024-07-11 02:46:53.499275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.361 qpair failed and we were unable to recover it. 00:41:03.361 [2024-07-11 02:46:53.499364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.499391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 
00:41:03.362 [2024-07-11 02:46:53.499491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.499538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.499634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.499663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.499787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.499833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.499933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.499972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.500078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.500111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 
00:41:03.362 [2024-07-11 02:46:53.500227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.500284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.500367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.500393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.500516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.500559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.500647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.500674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.500783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.500831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 
00:41:03.362 [2024-07-11 02:46:53.500953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.501009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.501090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.501116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.501233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.501293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.501377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.501404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.501538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.501585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 
00:41:03.362 [2024-07-11 02:46:53.501668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.501694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.501823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.501904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.502015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.502067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.502162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.502190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.502298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.502339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 
00:41:03.362 [2024-07-11 02:46:53.502483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.502543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.502776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.502802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.502887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.502914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.503026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.503080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.503165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.503192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 
00:41:03.362 [2024-07-11 02:46:53.503316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.503369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.503455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.503483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.503583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.503612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.503714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.503766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.503899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.503951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 
00:41:03.362 [2024-07-11 02:46:53.504036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.504062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.504153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.504182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.504272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.504299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.504403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.504467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.504600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.504683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 
00:41:03.362 [2024-07-11 02:46:53.504774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.504801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.504885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.504912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.505060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.505106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.505200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.505228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.505319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.505347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 
00:41:03.362 [2024-07-11 02:46:53.505443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.505472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.505603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.505647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.505744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.505773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.505867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.505895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.506017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.506074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 
00:41:03.362 [2024-07-11 02:46:53.506200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.506256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.506340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.506367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.506475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.506536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.506643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.506693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.506806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.506855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 
00:41:03.362 [2024-07-11 02:46:53.506968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.362 [2024-07-11 02:46:53.507018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.362 qpair failed and we were unable to recover it. 00:41:03.362 [2024-07-11 02:46:53.507126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.507183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.507388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.507415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.507507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.507539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.507640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.507691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 
00:41:03.363 [2024-07-11 02:46:53.507776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.507803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.507896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.507924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.508030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.508076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.508173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.508202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.508292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.508319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 
00:41:03.363 [2024-07-11 02:46:53.508443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.508494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.508614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.508661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.508758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.508785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.508870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.508897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.508978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.509005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 
00:41:03.363 [2024-07-11 02:46:53.509097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.509126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.509210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.509237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.509320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.509346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.509439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.509466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.509583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.509629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 
00:41:03.363 [2024-07-11 02:46:53.509725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.509756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.509898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.509947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.510041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.510069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.510182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.510238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.510365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.510421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 
00:41:03.363 [2024-07-11 02:46:53.510515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.510543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.510774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.510801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.510888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.510914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.511023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.511085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.511170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.511197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 
00:41:03.363 [2024-07-11 02:46:53.511300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.511351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.511439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.511469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.511583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.511612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.511697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.511723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 00:41:03.363 [2024-07-11 02:46:53.511845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.363 [2024-07-11 02:46:53.511892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.363 qpair failed and we were unable to recover it. 
00:41:03.363 [2024-07-11 02:46:53.511975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.512001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.512089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.512116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.512217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.512244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.512331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.512358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.512460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.512500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.512611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.512640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.512729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.512755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.512838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.512864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.512946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.512972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.513056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.513083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.513174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.513201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.513286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.513316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.513412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.513445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.513540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.513569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.513699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.513743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.513826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.513853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.513954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.513995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.514104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.514133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.514222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.514250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.514335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.514365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.514454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.514481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.514592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.514657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.514777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.514824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.514932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.514979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.515068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.515094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.515177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.515205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.515296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.515323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.515408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.515435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.515530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.515557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.515673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.515730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.515819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.515845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.515928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.515955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.363 [2024-07-11 02:46:53.516045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.363 [2024-07-11 02:46:53.516071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.363 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.516153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.516179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.516264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.516293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.516400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.516440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.516546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.516575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.516810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.516837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.516957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.517013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.517097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.517128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.517211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.517238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.517354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.517402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.517491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.517527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.517614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.517642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.517751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.517799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.517894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.517921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.518005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.518033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.518116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.518142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.518231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.518259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.518342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.518369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.518456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.518483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.518573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.518602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.518769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.518796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.518908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.518968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.519057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.519084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.519189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.519240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.519335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.519387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.519476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.519503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.519628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.519677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.519818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.519884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.519999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.520059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.520174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.520227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.520317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.520343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.520454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.520507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.520641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.520694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.520790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.520819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.520905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.520934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.521020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.521047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.521172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.521223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.521304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.521330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.521439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.521499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.521658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.521686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.521772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.521801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.521891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.521919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.522008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.522036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.522150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.522212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.522294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.522321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.522467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.522494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.522588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.522615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.522707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.522733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.522820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.522848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.522986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.523041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.523124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.523150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.523261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.523307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.523392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.523419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.523519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.523546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.523633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.523659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.523746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.523772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.523858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.523884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.523966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.523992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.524194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.524223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.524339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.524386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.524471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.524499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.524598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.524624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.524708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.524734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.524850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.524897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.525035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.525088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.364 qpair failed and we were unable to recover it.
00:41:03.364 [2024-07-11 02:46:53.525175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.364 [2024-07-11 02:46:53.525203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.365 qpair failed and we were unable to recover it.
00:41:03.365 [2024-07-11 02:46:53.525435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.365 [2024-07-11 02:46:53.525463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.365 qpair failed and we were unable to recover it.
00:41:03.365 [2024-07-11 02:46:53.525579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.365 [2024-07-11 02:46:53.525627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.365 qpair failed and we were unable to recover it.
00:41:03.365 [2024-07-11 02:46:53.525738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.365 [2024-07-11 02:46:53.525786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.365 qpair failed and we were unable to recover it.
00:41:03.365 [2024-07-11 02:46:53.525871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.365 [2024-07-11 02:46:53.525897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.365 qpair failed and we were unable to recover it.
00:41:03.365 [2024-07-11 02:46:53.526020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.365 [2024-07-11 02:46:53.526078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.365 qpair failed and we were unable to recover it.
00:41:03.365 [2024-07-11 02:46:53.526175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.365 [2024-07-11 02:46:53.526216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.365 qpair failed and we were unable to recover it.
00:41:03.365 [2024-07-11 02:46:53.526329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.365 [2024-07-11 02:46:53.526387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.365 qpair failed and we were unable to recover it.
00:41:03.365 [2024-07-11 02:46:53.526477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.365 [2024-07-11 02:46:53.526505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.365 qpair failed and we were unable to recover it.
00:41:03.365 [2024-07-11 02:46:53.526597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.365 [2024-07-11 02:46:53.526629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.365 qpair failed and we were unable to recover it.
00:41:03.365 [2024-07-11 02:46:53.526718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.365 [2024-07-11 02:46:53.526745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.365 qpair failed and we were unable to recover it.
00:41:03.365 [2024-07-11 02:46:53.526829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.365 [2024-07-11 02:46:53.526858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.365 qpair failed and we were unable to recover it.
00:41:03.365 [2024-07-11 02:46:53.526946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.365 [2024-07-11 02:46:53.526974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.365 qpair failed and we were unable to recover it.
00:41:03.365 [2024-07-11 02:46:53.527086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.365 [2024-07-11 02:46:53.527148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.365 qpair failed and we were unable to recover it.
00:41:03.365 [2024-07-11 02:46:53.527233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.365 [2024-07-11 02:46:53.527260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.365 qpair failed and we were unable to recover it.
00:41:03.365 [2024-07-11 02:46:53.527363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.365 [2024-07-11 02:46:53.527402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.365 qpair failed and we were unable to recover it.
00:41:03.365 [2024-07-11 02:46:53.527544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.365 [2024-07-11 02:46:53.527590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.365 qpair failed and we were unable to recover it.
00:41:03.365 [2024-07-11 02:46:53.527691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.365 [2024-07-11 02:46:53.527742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.365 qpair failed and we were unable to recover it.
00:41:03.365 [2024-07-11 02:46:53.527831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.527858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.527966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.528027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.528133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.528179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.528261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.528288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.528384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.528412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 
00:41:03.365 [2024-07-11 02:46:53.528516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.528546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.528638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.528666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.528749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.528776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.528871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.528898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.528993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.529022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 
00:41:03.365 [2024-07-11 02:46:53.529105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.529133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.529229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.529256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.529346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.529374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.529480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.529506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.529599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.529625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 
00:41:03.365 [2024-07-11 02:46:53.529746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.529813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.529917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.529943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.530025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.530050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.530186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.530243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.530352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.530399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 
00:41:03.365 [2024-07-11 02:46:53.530485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.530517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.530601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.530627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.530743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.530800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.530898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.530966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.531082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.531142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 
00:41:03.365 [2024-07-11 02:46:53.531258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.531306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.531541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.531569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.531655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.531682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.531765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.531792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.531905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.531954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 
00:41:03.365 [2024-07-11 02:46:53.532064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.532110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.532197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.532225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.532318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.532345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.532432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.532459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.532569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.532607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 
00:41:03.365 [2024-07-11 02:46:53.532703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.532730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.532814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.532840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.532933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.532959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.533080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.533130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.533257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.533312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 
00:41:03.365 [2024-07-11 02:46:53.533422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.533468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.533563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.533590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.365 [2024-07-11 02:46:53.533690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.365 [2024-07-11 02:46:53.533739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.365 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.533880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.533938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.534036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.534081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 
00:41:03.366 [2024-07-11 02:46:53.534162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.534192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.534306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.534352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.534442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.534469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.534601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.534630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.534770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.534833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 
00:41:03.366 [2024-07-11 02:46:53.534916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.534942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.535084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.535138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.535234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.535272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.535393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.535438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.535525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.535553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 
00:41:03.366 [2024-07-11 02:46:53.535678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.535705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.535816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.535854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.535952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.535978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.536066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.536092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.536205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.536254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 
00:41:03.366 [2024-07-11 02:46:53.536343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.536370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.536467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.536508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.536654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.536706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.536829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.536888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.536978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.537005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 
00:41:03.366 [2024-07-11 02:46:53.537111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.537158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.537254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.537281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.537397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.537441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.537616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.537643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.537826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.537878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 
00:41:03.366 [2024-07-11 02:46:53.538029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.538056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.538161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.538211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.538299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.538329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.538449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.538503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.538605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.538633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 
00:41:03.366 [2024-07-11 02:46:53.538836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.538863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.538952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.538979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.539065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.539091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.539182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.539210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.539300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.539329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 
00:41:03.366 [2024-07-11 02:46:53.539412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.539439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.539534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.539561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.539660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.539709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.539794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.539820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.539910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.539937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 
00:41:03.366 [2024-07-11 02:46:53.540136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.540163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.540261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.540289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.540371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.540398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.540480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.540507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.540604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.540631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 
00:41:03.366 [2024-07-11 02:46:53.540727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.540753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.540839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.540865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.540966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.541006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.541103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.541131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.541221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.541249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 
00:41:03.366 [2024-07-11 02:46:53.541336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.541362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.541448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.541474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.541565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.541592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.541697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.541752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.541855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.541918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 
00:41:03.366 [2024-07-11 02:46:53.542002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.542029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.542118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.542147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.542244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.542274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.542360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.542387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.542474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.542500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 
00:41:03.366 [2024-07-11 02:46:53.542627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.542654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.542745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.542772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.542861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.542887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.542992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.543039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 00:41:03.366 [2024-07-11 02:46:53.543120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.366 [2024-07-11 02:46:53.543146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.366 qpair failed and we were unable to recover it. 
00:41:03.366 [2024-07-11 02:46:53.543288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.543354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.543453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.543483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.543604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.543666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.543753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.543779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.543864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.543890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 
00:41:03.367 [2024-07-11 02:46:53.544016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.544073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.544185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.544232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.544312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.544338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.544475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.544523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.544641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.544687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 
00:41:03.367 [2024-07-11 02:46:53.544776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.544802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.544896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.544923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.545072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.545110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.545209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.545236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.545341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.545402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 
00:41:03.367 [2024-07-11 02:46:53.545560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.545588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.545687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.545714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.545798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.545825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.545938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.545978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.546061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.546087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 
00:41:03.367 [2024-07-11 02:46:53.546172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.546199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.546313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2274170 is same with the state(5) to be set 00:41:03.367 [2024-07-11 02:46:53.546435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.546476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.546723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.546753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.546842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.546869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.546977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.547024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 
00:41:03.367 [2024-07-11 02:46:53.547144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.547198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.547289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.547321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.547437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.547483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.547572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.547599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.547716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.547781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 
00:41:03.367 [2024-07-11 02:46:53.547886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.547937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.548050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.548097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.548183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.548211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.548293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.548320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.548401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.548427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 
00:41:03.367 [2024-07-11 02:46:53.548521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.548550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.548667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.548714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.548797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.548824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.548929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.548986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.549121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.549174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 
00:41:03.367 [2024-07-11 02:46:53.549296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.549354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.549440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.549466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.549559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.549588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.549720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.549775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.549885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.549931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 
00:41:03.367 [2024-07-11 02:46:53.550166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.550193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.550280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.550306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.550415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.550466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.550599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.550647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.550742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.550770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 
00:41:03.367 [2024-07-11 02:46:53.550850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.550876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.550970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.550997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.551097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.551134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.551244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.551284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 00:41:03.367 [2024-07-11 02:46:53.551395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.367 [2024-07-11 02:46:53.551446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.367 qpair failed and we were unable to recover it. 
00:41:03.368 [2024-07-11 02:46:53.551571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.368 [2024-07-11 02:46:53.551634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.368 qpair failed and we were unable to recover it. 00:41:03.368 [2024-07-11 02:46:53.551756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.368 [2024-07-11 02:46:53.551810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.368 qpair failed and we were unable to recover it. 00:41:03.368 [2024-07-11 02:46:53.551895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.368 [2024-07-11 02:46:53.551922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.368 qpair failed and we were unable to recover it. 00:41:03.368 [2024-07-11 02:46:53.552013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.368 [2024-07-11 02:46:53.552040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.368 qpair failed and we were unable to recover it. 00:41:03.368 [2024-07-11 02:46:53.552129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.368 [2024-07-11 02:46:53.552155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.368 qpair failed and we were unable to recover it. 
00:41:03.368 [2024-07-11 02:46:53.552278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.368 [2024-07-11 02:46:53.552323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.368 qpair failed and we were unable to recover it. 00:41:03.368 [2024-07-11 02:46:53.552409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.368 [2024-07-11 02:46:53.552435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.368 qpair failed and we were unable to recover it. 00:41:03.368 [2024-07-11 02:46:53.552704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.368 [2024-07-11 02:46:53.552731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.368 qpair failed and we were unable to recover it. 00:41:03.368 [2024-07-11 02:46:53.552813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.368 [2024-07-11 02:46:53.552839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.368 qpair failed and we were unable to recover it. 00:41:03.368 [2024-07-11 02:46:53.552940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.368 [2024-07-11 02:46:53.552977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.368 qpair failed and we were unable to recover it. 
00:41:03.369 [... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pairs repeat from 02:46:53.553076 through 02:46:53.568684 for tqpair values 0x7f3334000b90, 0x2266180, 0x7f333c000b90, and 0x7f332c000b90, all targeting addr=10.0.0.2, port=4420; each repetition ends with "qpair failed and we were unable to recover it." ...]
00:41:03.369 [2024-07-11 02:46:53.568796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.369 [2024-07-11 02:46:53.568851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.369 qpair failed and we were unable to recover it. 00:41:03.369 [2024-07-11 02:46:53.568938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.369 [2024-07-11 02:46:53.568967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.369 qpair failed and we were unable to recover it. 00:41:03.369 [2024-07-11 02:46:53.569084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.369 [2024-07-11 02:46:53.569129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.369 qpair failed and we were unable to recover it. 00:41:03.369 [2024-07-11 02:46:53.569244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.369 [2024-07-11 02:46:53.569298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.369 qpair failed and we were unable to recover it. 00:41:03.369 [2024-07-11 02:46:53.569386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.369 [2024-07-11 02:46:53.569414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.369 qpair failed and we were unable to recover it. 
00:41:03.369 [2024-07-11 02:46:53.569497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.369 [2024-07-11 02:46:53.569531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.369 qpair failed and we were unable to recover it. 00:41:03.369 [2024-07-11 02:46:53.569621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.369 [2024-07-11 02:46:53.569647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.369 qpair failed and we were unable to recover it. 00:41:03.369 [2024-07-11 02:46:53.569742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.369 [2024-07-11 02:46:53.569770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.369 qpair failed and we were unable to recover it. 00:41:03.369 [2024-07-11 02:46:53.569857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.369 [2024-07-11 02:46:53.569883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.369 qpair failed and we were unable to recover it. 00:41:03.369 [2024-07-11 02:46:53.569969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.369 [2024-07-11 02:46:53.569996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.369 qpair failed and we were unable to recover it. 
00:41:03.369 [2024-07-11 02:46:53.570084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.369 [2024-07-11 02:46:53.570111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.369 qpair failed and we were unable to recover it. 00:41:03.369 [2024-07-11 02:46:53.570209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.369 [2024-07-11 02:46:53.570239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.369 qpair failed and we were unable to recover it. 00:41:03.369 [2024-07-11 02:46:53.570328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.369 [2024-07-11 02:46:53.570357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.369 qpair failed and we were unable to recover it. 00:41:03.369 [2024-07-11 02:46:53.570450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.570479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.570579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.570614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 
00:41:03.370 [2024-07-11 02:46:53.570709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.570737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.570822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.570849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.570940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.570967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.571062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.571090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.571179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.571206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 
00:41:03.370 [2024-07-11 02:46:53.571291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.571320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.571428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.571474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.571566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.571593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.571712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.571765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.571876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.571930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 
00:41:03.370 [2024-07-11 02:46:53.572011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.572037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.572142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.572203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.572312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.572360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.572481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.572564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.572655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.572684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 
00:41:03.370 [2024-07-11 02:46:53.572778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.572806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.572900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.572927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.573018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.573046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.573144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.573171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.573261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.573287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 
00:41:03.370 [2024-07-11 02:46:53.573402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.573430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.573526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.573553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.573634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.573661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.573742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.573768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.573861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.573888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 
00:41:03.370 [2024-07-11 02:46:53.573975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.574003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.574204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.574233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.574318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.574346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.574435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.574461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.574552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.574580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 
00:41:03.370 [2024-07-11 02:46:53.574673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.574700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.574791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.574819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.574901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.574927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.575018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.575049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.575147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.575176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 
00:41:03.370 [2024-07-11 02:46:53.575284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.575312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.575399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.575426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.575517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.575544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.575638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.575673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.575805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.575850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 
00:41:03.370 [2024-07-11 02:46:53.575964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.576025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.576137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.576190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.576276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.576304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.576384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.576410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.576519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.576566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 
00:41:03.370 [2024-07-11 02:46:53.576678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.576734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.576857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.576914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.577029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.577074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.577158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.577185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.577275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.577303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 
00:41:03.370 [2024-07-11 02:46:53.577393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.577421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.577518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.577545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.577629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.577657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.577749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.577777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.577903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.577948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 
00:41:03.370 [2024-07-11 02:46:53.578059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.578119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.578234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.578280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.370 qpair failed and we were unable to recover it. 00:41:03.370 [2024-07-11 02:46:53.578378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.370 [2024-07-11 02:46:53.578414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.371 qpair failed and we were unable to recover it. 00:41:03.371 [2024-07-11 02:46:53.578522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.371 [2024-07-11 02:46:53.578551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.371 qpair failed and we were unable to recover it. 00:41:03.371 [2024-07-11 02:46:53.578651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.371 [2024-07-11 02:46:53.578679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.371 qpair failed and we were unable to recover it. 
00:41:03.371 [2024-07-11 02:46:53.578797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.371 [2024-07-11 02:46:53.578849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.371 qpair failed and we were unable to recover it. 00:41:03.371 [2024-07-11 02:46:53.578951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.371 [2024-07-11 02:46:53.579012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.371 qpair failed and we were unable to recover it. 00:41:03.371 [2024-07-11 02:46:53.579093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.371 [2024-07-11 02:46:53.579120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.371 qpair failed and we were unable to recover it. 00:41:03.371 [2024-07-11 02:46:53.579198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.371 [2024-07-11 02:46:53.579225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.371 qpair failed and we were unable to recover it. 00:41:03.371 [2024-07-11 02:46:53.579324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.371 [2024-07-11 02:46:53.579352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.371 qpair failed and we were unable to recover it. 
00:41:03.372 [2024-07-11 02:46:53.595122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.372 [2024-07-11 02:46:53.595190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.372 qpair failed and we were unable to recover it. 00:41:03.372 [2024-07-11 02:46:53.595288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.372 [2024-07-11 02:46:53.595315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.372 qpair failed and we were unable to recover it. 00:41:03.372 [2024-07-11 02:46:53.595427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.372 [2024-07-11 02:46:53.595485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.372 qpair failed and we were unable to recover it. 00:41:03.372 [2024-07-11 02:46:53.595601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.372 [2024-07-11 02:46:53.595650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.372 qpair failed and we were unable to recover it. 00:41:03.372 [2024-07-11 02:46:53.595731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.372 [2024-07-11 02:46:53.595758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.372 qpair failed and we were unable to recover it. 
00:41:03.372 [2024-07-11 02:46:53.595885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.372 [2024-07-11 02:46:53.595940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.372 qpair failed and we were unable to recover it. 00:41:03.372 [2024-07-11 02:46:53.596046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.372 [2024-07-11 02:46:53.596084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.372 qpair failed and we were unable to recover it. 00:41:03.372 [2024-07-11 02:46:53.596211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.372 [2024-07-11 02:46:53.596256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.372 qpair failed and we were unable to recover it. 00:41:03.372 [2024-07-11 02:46:53.596344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.372 [2024-07-11 02:46:53.596372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.372 qpair failed and we were unable to recover it. 00:41:03.372 [2024-07-11 02:46:53.596485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.372 [2024-07-11 02:46:53.596536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.372 qpair failed and we were unable to recover it. 
00:41:03.372 [2024-07-11 02:46:53.596628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.372 [2024-07-11 02:46:53.596655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.372 qpair failed and we were unable to recover it. 00:41:03.372 [2024-07-11 02:46:53.596763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.372 [2024-07-11 02:46:53.596808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.372 qpair failed and we were unable to recover it. 00:41:03.372 [2024-07-11 02:46:53.596890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.372 [2024-07-11 02:46:53.596922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.372 qpair failed and we were unable to recover it. 00:41:03.372 [2024-07-11 02:46:53.597048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.372 [2024-07-11 02:46:53.597102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.372 qpair failed and we were unable to recover it. 00:41:03.372 [2024-07-11 02:46:53.597221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.372 [2024-07-11 02:46:53.597265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.372 qpair failed and we were unable to recover it. 
00:41:03.372 [2024-07-11 02:46:53.597375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.372 [2024-07-11 02:46:53.597439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.372 qpair failed and we were unable to recover it. 00:41:03.372 [2024-07-11 02:46:53.597531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.372 [2024-07-11 02:46:53.597559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.372 qpair failed and we were unable to recover it. 00:41:03.372 [2024-07-11 02:46:53.597680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.372 [2024-07-11 02:46:53.597734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.372 qpair failed and we were unable to recover it. 00:41:03.372 [2024-07-11 02:46:53.597838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.372 [2024-07-11 02:46:53.597885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.372 qpair failed and we were unable to recover it. 00:41:03.372 [2024-07-11 02:46:53.598011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.372 [2024-07-11 02:46:53.598065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.372 qpair failed and we were unable to recover it. 
00:41:03.372 [2024-07-11 02:46:53.598170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.372 [2024-07-11 02:46:53.598238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.372 qpair failed and we were unable to recover it. 00:41:03.372 [2024-07-11 02:46:53.598349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.372 [2024-07-11 02:46:53.598394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.372 qpair failed and we were unable to recover it. 00:41:03.372 [2024-07-11 02:46:53.598524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.372 [2024-07-11 02:46:53.598581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.372 qpair failed and we were unable to recover it. 00:41:03.372 [2024-07-11 02:46:53.598693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.372 [2024-07-11 02:46:53.598740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.372 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.598844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.598890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 
00:41:03.373 [2024-07-11 02:46:53.598999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.599053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.599143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.599169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.599260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.599286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.599373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.599400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.599488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.599523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 
00:41:03.373 [2024-07-11 02:46:53.599616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.599643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.599727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.599754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.599836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.599862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.599960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.600001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.600206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.600234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 
00:41:03.373 [2024-07-11 02:46:53.600323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.600349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.600431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.600458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.600551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.600578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.600684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.600731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.600820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.600847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 
00:41:03.373 [2024-07-11 02:46:53.600962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.601022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.601105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.601131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.601224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.601252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.601338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.601365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.601461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.601491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 
00:41:03.373 [2024-07-11 02:46:53.601589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.601617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.601733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.601788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.601874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.601901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.601993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.602021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.602110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.602139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 
00:41:03.373 [2024-07-11 02:46:53.602223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.602250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.602342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.602371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.602463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.602495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.602589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.602617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.602729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.602786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 
00:41:03.373 [2024-07-11 02:46:53.602871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.602899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.602991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.603018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.603137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.603190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.603313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.603368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.603453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.603481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 
00:41:03.373 [2024-07-11 02:46:53.603612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.603668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.603777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.603839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.603966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.604018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.604121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.604168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.604275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.604336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 
00:41:03.373 [2024-07-11 02:46:53.604437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.604485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.604730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.604757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.604842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.604869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.604950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.604976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.605088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.605149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 
00:41:03.373 [2024-07-11 02:46:53.605263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.605308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.605408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.605456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.605548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.605576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.605660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.605687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.605769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.605795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 
00:41:03.373 [2024-07-11 02:46:53.605880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.605906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.605998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.606027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.606138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.606187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.606292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.606355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.606443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.606471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 
00:41:03.373 [2024-07-11 02:46:53.606564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.606591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.606675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.606702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.606805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.606867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.606950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.606976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.607090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.607148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 
00:41:03.373 [2024-07-11 02:46:53.607241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.607270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.607353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.607382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.607465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.607491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.607589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.607616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.607704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.607731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 
00:41:03.373 [2024-07-11 02:46:53.607810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.607836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.607920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.607946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.608032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.608059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.608148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.608177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 00:41:03.373 [2024-07-11 02:46:53.608275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.373 [2024-07-11 02:46:53.608305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.373 qpair failed and we were unable to recover it. 
00:41:03.374 [2024-07-11 02:46:53.608388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.608416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.608647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.608674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.608761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.608788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.608904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.608958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.609053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.609081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 
00:41:03.374 [2024-07-11 02:46:53.609165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.609193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.609282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.609310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.609401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.609428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.609523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.609550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.609634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.609660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 
00:41:03.374 [2024-07-11 02:46:53.609755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.609783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.609874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.609901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.609989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.610016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.610107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.610135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.610221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.610249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 
00:41:03.374 [2024-07-11 02:46:53.610336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.610362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.610460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.610507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.610625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.610689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.610795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.610864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.610957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.610985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 
00:41:03.374 [2024-07-11 02:46:53.611092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.611148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.611239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.611266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.611383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.611436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.611526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.611554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.611641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.611672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 
00:41:03.374 [2024-07-11 02:46:53.611765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.611792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.611874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.611901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.611988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.612015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.612108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.612137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.612227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.612255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 
00:41:03.374 [2024-07-11 02:46:53.612339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.612366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.612454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.612481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.612571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.612598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.612683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.612710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.612803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.612829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 
00:41:03.374 [2024-07-11 02:46:53.612917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.612943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.613023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.613049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.613135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.613160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.613250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.613276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.613361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.613387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 
00:41:03.374 [2024-07-11 02:46:53.613477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.613503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.613610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.613638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.613740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.613767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.613863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.613890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.613974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.614000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 
00:41:03.374 [2024-07-11 02:46:53.614137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.614164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.614250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.614279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.614360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.614387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.614479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.614507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.614622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.614684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 
00:41:03.374 [2024-07-11 02:46:53.614799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.614852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.614951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.614991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.615076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.615104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.615195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.615225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.615332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.615394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 
00:41:03.374 [2024-07-11 02:46:53.615501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.615567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.615653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.615680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.615764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.615790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.615875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.615902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.615991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.616017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 
00:41:03.374 [2024-07-11 02:46:53.616110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.616137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.616223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.616253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.616345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.616374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.616461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.616487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.616588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.616616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 
00:41:03.374 [2024-07-11 02:46:53.616712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.616739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.616831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.616857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.616946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.616974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.617081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.617126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.617214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.617240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 
00:41:03.374 [2024-07-11 02:46:53.617325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.617351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.617430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.617457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.617542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.617569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.617651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.617679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 00:41:03.374 [2024-07-11 02:46:53.617765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.374 [2024-07-11 02:46:53.617792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.374 qpair failed and we were unable to recover it. 
00:41:03.375 [2024-07-11 02:46:53.617885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.375 [2024-07-11 02:46:53.617912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.375 qpair failed and we were unable to recover it. 00:41:03.375 [2024-07-11 02:46:53.618004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.375 [2024-07-11 02:46:53.618031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.375 qpair failed and we were unable to recover it. 00:41:03.375 [2024-07-11 02:46:53.618126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.375 [2024-07-11 02:46:53.618156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.375 qpair failed and we were unable to recover it. 00:41:03.375 [2024-07-11 02:46:53.618249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.375 [2024-07-11 02:46:53.618279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.375 qpair failed and we were unable to recover it. 00:41:03.375 [2024-07-11 02:46:53.618369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.375 [2024-07-11 02:46:53.618395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.375 qpair failed and we were unable to recover it. 
00:41:03.375 [2024-07-11 02:46:53.618477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.375 [2024-07-11 02:46:53.618503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.375 qpair failed and we were unable to recover it. 00:41:03.375 [2024-07-11 02:46:53.618604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.375 [2024-07-11 02:46:53.618631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.375 qpair failed and we were unable to recover it. 00:41:03.375 [2024-07-11 02:46:53.618717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.375 [2024-07-11 02:46:53.618743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.375 qpair failed and we were unable to recover it. 00:41:03.375 [2024-07-11 02:46:53.618833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.375 [2024-07-11 02:46:53.618863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.375 qpair failed and we were unable to recover it. 00:41:03.375 [2024-07-11 02:46:53.618945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.375 [2024-07-11 02:46:53.618971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.375 qpair failed and we were unable to recover it. 
00:41:03.375 [2024-07-11 02:46:53.619057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.619084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.619172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.619200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.619282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.619311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.619403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.619431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.619523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.619552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.619642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.619669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.619756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.619782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.619896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.619955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.620038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.620066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.620158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.620185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.620280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.620309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.620391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.620418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.620545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.620601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.620689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.620716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.620805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.620833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.620923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.620951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.621093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.621135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.621242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.621307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.621393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.621420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.621529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.621575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.621704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.621757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.621841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.621868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.621952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.621978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.622059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.622087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.622168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.622195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.622276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.622302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.622396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.622424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.622627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.622656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.622748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.622774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.622884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.622929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.623029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.623064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.623185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.623231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.623317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.623343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.623541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.623575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.623686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.623748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.623837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.623864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.623949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.623976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.624062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.624089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.624176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.624204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.624297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.624323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.624404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.624431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.624523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.624550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.624645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.624673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.624760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.624786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.624870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.624896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.625021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.625080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.625188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.625239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.625336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.625365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.625445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.625472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.625607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.625659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.625763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.625811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.625920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.625976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.626075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.626105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.626205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.626232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.626340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.626383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.626472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.626498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.626609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.626638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.626722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.626749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.375 [2024-07-11 02:46:53.626841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.375 [2024-07-11 02:46:53.626870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.375 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.626962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.626988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.627091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.627119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.627291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.627331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.627438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.627482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.627575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.627602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.627715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.627762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.627861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.627909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.627995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.628021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.628134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.628179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.628274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.628303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.628418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.628466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.628587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.628634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.628740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.628787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.628906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.628957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.629068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.629128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.629247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.629305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.629423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.629468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.629586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.629631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.629723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.629751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.629842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.629870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.629969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.629996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.630083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.630111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.630221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.630283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.630373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.630401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.630490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.630523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.630619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.630647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.630739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.630766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.630855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.630883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.630978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.631005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.631090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.631117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.631204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.631231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.631325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.631353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.631452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.631484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.631600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.631629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.631728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.631769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.631875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.376 [2024-07-11 02:46:53.631938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.376 qpair failed and we were unable to recover it.
00:41:03.376 [2024-07-11 02:46:53.632027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.632055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.632146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.632173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.632258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.632285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.632372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.632398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.632486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.632520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 
00:41:03.376 [2024-07-11 02:46:53.632605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.632636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.632746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.632791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.632882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.632908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.633016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.633044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.633200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.633226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 
00:41:03.376 [2024-07-11 02:46:53.633337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.633381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.633462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.633489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.633611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.633655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.633735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.633762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.633872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.633921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 
00:41:03.376 [2024-07-11 02:46:53.634009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.634036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.634126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.634154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.634271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.634318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.634404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.634430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.634528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.634555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 
00:41:03.376 [2024-07-11 02:46:53.634641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.634668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.634762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.634789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.634887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.634915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.635012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.635048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.635150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.635176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 
00:41:03.376 [2024-07-11 02:46:53.635261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.635288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.635377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.635406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.635487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.635521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.635636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.635691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.635800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.635842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 
00:41:03.376 [2024-07-11 02:46:53.635932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.635959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.636080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.636134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.636242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.636294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.636394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.636428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.636538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.636566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 
00:41:03.376 [2024-07-11 02:46:53.636653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.636679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.636773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.636800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.636886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.636913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.636997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.637027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 00:41:03.376 [2024-07-11 02:46:53.637115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.376 [2024-07-11 02:46:53.637142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.376 qpair failed and we were unable to recover it. 
00:41:03.377 [2024-07-11 02:46:53.637230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.637257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.637371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.637415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.637524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.637571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.637663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.637690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.637796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.637841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 
00:41:03.377 [2024-07-11 02:46:53.637929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.637957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.638052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.638081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.638192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.638229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.638381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.638440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.638573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.638630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 
00:41:03.377 [2024-07-11 02:46:53.638740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.638776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.638927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.639000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.639093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.639121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.639258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.639305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.639423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.639458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 
00:41:03.377 [2024-07-11 02:46:53.639573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.639601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.639690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.639716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.639869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.639912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.640012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.640046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.640186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.640239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 
00:41:03.377 [2024-07-11 02:46:53.640326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.640352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.640438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.640464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.640564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.640599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.640721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.640765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.640876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.640921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 
00:41:03.377 [2024-07-11 02:46:53.641035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.641080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.641186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.641236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.641326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.641353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.641434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.641461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.641551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.641579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 
00:41:03.377 [2024-07-11 02:46:53.641685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.641732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.641845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.641899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.642009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.642057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.642156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.642203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.642340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.642389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 
00:41:03.377 [2024-07-11 02:46:53.642548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.642585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.642695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.642721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.642808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.642834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.642941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.642984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.643095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.643139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 
00:41:03.377 [2024-07-11 02:46:53.643224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.643250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.643336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.643362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.643460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.643494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.643609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.643635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 00:41:03.377 [2024-07-11 02:46:53.643745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.377 [2024-07-11 02:46:53.643798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.377 qpair failed and we were unable to recover it. 
00:41:03.377 [2024-07-11 02:46:53.643903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.377 [2024-07-11 02:46:53.643948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.377 qpair failed and we were unable to recover it.
[... the connect()/qpair-failed error pair above repeats continuously from 02:46:53.643903 through 02:46:53.660303, always with errno = 111 against addr=10.0.0.2, port=4420, cycling through tqpair handles 0x2266180, 0x7f332c000b90, and 0x7f333c000b90; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:41:03.379 [2024-07-11 02:46:53.660434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.660485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.660631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.660685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.660794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.660853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.660964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.661008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.661099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.661125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 
00:41:03.379 [2024-07-11 02:46:53.661219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.661246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.661334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.661361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.661448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.661477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.661666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.661694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.661818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.661871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 
00:41:03.379 [2024-07-11 02:46:53.661982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.662026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.662122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.662150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.662266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.662310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.662406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.662435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.662520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.662547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 
00:41:03.379 [2024-07-11 02:46:53.662641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.662670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.662760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.662788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.662874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.662901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.663006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.663053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.663153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.663181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 
00:41:03.379 [2024-07-11 02:46:53.663277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.663308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.663396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.663422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.663557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.663584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.663674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.663700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.663792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.663818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 
00:41:03.379 [2024-07-11 02:46:53.663899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.663926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.664029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.664062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.664162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.664188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.664301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.664348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.664442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.664469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 
00:41:03.379 [2024-07-11 02:46:53.664572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.664601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.664706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.664752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.664872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.664918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.665026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.665071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.665200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.665260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 
00:41:03.379 [2024-07-11 02:46:53.665372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.665415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.665529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.665573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.665668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.665695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.665807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.665851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.665943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.665971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 
00:41:03.379 [2024-07-11 02:46:53.666078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.666124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.666234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.666279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.666362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.666389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.666477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.666504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.666617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.666644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 
00:41:03.379 [2024-07-11 02:46:53.666816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.666844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.666961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.667025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.667133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.667183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.667275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.667303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.667399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.667426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 
00:41:03.379 [2024-07-11 02:46:53.667523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.667551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.667711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.667739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.667832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.667859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.667965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.668010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.668105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.668134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 
00:41:03.379 [2024-07-11 02:46:53.668230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.668258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.668400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.668440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.379 [2024-07-11 02:46:53.668535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.379 [2024-07-11 02:46:53.668563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.379 qpair failed and we were unable to recover it. 00:41:03.380 [2024-07-11 02:46:53.668669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.380 [2024-07-11 02:46:53.668714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.380 qpair failed and we were unable to recover it. 00:41:03.380 [2024-07-11 02:46:53.668837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.380 [2024-07-11 02:46:53.668895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.380 qpair failed and we were unable to recover it. 
00:41:03.380 [2024-07-11 02:46:53.668974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.380 [2024-07-11 02:46:53.669000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.380 qpair failed and we were unable to recover it. 00:41:03.380 [2024-07-11 02:46:53.669143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.380 [2024-07-11 02:46:53.669170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.380 qpair failed and we were unable to recover it. 00:41:03.380 [2024-07-11 02:46:53.669261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.380 [2024-07-11 02:46:53.669289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.380 qpair failed and we were unable to recover it. 00:41:03.380 [2024-07-11 02:46:53.669374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.380 [2024-07-11 02:46:53.669401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.380 qpair failed and we were unable to recover it. 00:41:03.380 [2024-07-11 02:46:53.669485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.380 [2024-07-11 02:46:53.669519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.380 qpair failed and we were unable to recover it. 
00:41:03.380 [2024-07-11 02:46:53.669655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.380 [2024-07-11 02:46:53.669681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.380 qpair failed and we were unable to recover it. 00:41:03.380 [2024-07-11 02:46:53.669783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.380 [2024-07-11 02:46:53.669829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.380 qpair failed and we were unable to recover it. 00:41:03.380 [2024-07-11 02:46:53.669932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.380 [2024-07-11 02:46:53.669977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.380 qpair failed and we were unable to recover it. 00:41:03.380 [2024-07-11 02:46:53.670059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.380 [2024-07-11 02:46:53.670085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.380 qpair failed and we were unable to recover it. 00:41:03.380 [2024-07-11 02:46:53.670185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.380 [2024-07-11 02:46:53.670230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.380 qpair failed and we were unable to recover it. 
00:41:03.380 [2024-07-11 02:46:53.670345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.380 [2024-07-11 02:46:53.670391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.380 qpair failed and we were unable to recover it. 00:41:03.380 [2024-07-11 02:46:53.670502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.380 [2024-07-11 02:46:53.670555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.380 qpair failed and we were unable to recover it. 00:41:03.380 [2024-07-11 02:46:53.670693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.380 [2024-07-11 02:46:53.670774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.380 qpair failed and we were unable to recover it. 00:41:03.380 [2024-07-11 02:46:53.670856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.380 [2024-07-11 02:46:53.670883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.380 qpair failed and we were unable to recover it. 00:41:03.380 [2024-07-11 02:46:53.670983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.380 [2024-07-11 02:46:53.671052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.380 qpair failed and we were unable to recover it. 
00:41:03.380 [2024-07-11 02:46:53.671168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.671212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.671325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.671369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.671483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.671535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.671625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.671651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.671763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.671807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.671895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.671922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.672013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.672043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.672126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.672153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.672249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.672283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.672383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.672410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.672526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.672569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.672661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.672687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.672776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.672803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.672906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.672933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.673015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.673042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.673128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.673155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.673261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.673308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.673399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.673430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.673522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.673552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.673643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.673670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.673770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.673797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.673899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.673933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.674033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.674060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.674154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.674181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.674272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.674299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.674393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.674422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.674524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.674552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.674655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.674700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.674806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.674850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.674931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.674957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.675059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.675096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.675206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.675239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.675367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.675412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.675523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.675569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.675674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.675720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.675836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.675893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.676025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.676070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.676182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.676225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.676318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.676344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.676456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.676502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.676599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.676626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.676724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.676756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.676871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.676916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.677009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.677035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.677123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.677149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.677256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.677299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.677384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.677413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.677527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.677571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.677683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.677728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.677817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.677844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.677938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.677966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.678069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.678101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.678202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.678229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.678368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.678407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.678487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.678518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.678599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.678626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.380 qpair failed and we were unable to recover it.
00:41:03.380 [2024-07-11 02:46:53.678717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.380 [2024-07-11 02:46:53.678743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.678831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.678860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.678976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.679029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.679117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.679144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.679233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.679259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.679348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.679374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.679458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.679484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.679586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.679613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.679812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.679839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.679935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.679967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.680095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.680141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.680249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.680292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.680397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.680441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.680615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.680659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.680826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.680852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.680954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.680997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.681112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.681163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.681262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.681295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.681399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.681426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.681541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.681569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.681670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.681704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.681800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.681826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.681966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.681992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.682084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.682110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.682200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.682226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.682311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.682339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.682437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.682466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.682562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.682590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.682703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.682747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.682855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.682917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.683008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.683035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.683124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.683150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.683244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.683272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.683361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.683389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.683477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.683505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.683592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.683619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.683823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.683849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.683941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.683970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.684096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.684149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.684270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.684318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.684410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.684437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.684536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.684564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.684648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.684675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.684769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.684795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.684900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.684932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.685032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.685059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.685147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.685173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.685261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.685288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.685378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.685406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.685500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.685535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.685649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.685700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.685808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.685853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.685963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.686023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.686108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.686134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.686225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.686251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.686333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.686359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.686465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.686521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.686633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.686676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.686766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.381 [2024-07-11 02:46:53.686793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.381 qpair failed and we were unable to recover it.
00:41:03.381 [2024-07-11 02:46:53.686884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.381 [2024-07-11 02:46:53.686911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.381 qpair failed and we were unable to recover it. 00:41:03.381 [2024-07-11 02:46:53.686997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.381 [2024-07-11 02:46:53.687024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.381 qpair failed and we were unable to recover it. 00:41:03.381 [2024-07-11 02:46:53.687249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.381 [2024-07-11 02:46:53.687275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.381 qpair failed and we were unable to recover it. 00:41:03.381 [2024-07-11 02:46:53.687408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.381 [2024-07-11 02:46:53.687463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.381 qpair failed and we were unable to recover it. 00:41:03.381 [2024-07-11 02:46:53.687606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.381 [2024-07-11 02:46:53.687654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.381 qpair failed and we were unable to recover it. 
00:41:03.381 [2024-07-11 02:46:53.687776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.381 [2024-07-11 02:46:53.687820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.381 qpair failed and we were unable to recover it. 00:41:03.381 [2024-07-11 02:46:53.687902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.381 [2024-07-11 02:46:53.687928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.381 qpair failed and we were unable to recover it. 00:41:03.381 [2024-07-11 02:46:53.688088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.381 [2024-07-11 02:46:53.688114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.381 qpair failed and we were unable to recover it. 00:41:03.381 [2024-07-11 02:46:53.688218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.381 [2024-07-11 02:46:53.688262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.381 qpair failed and we were unable to recover it. 00:41:03.381 [2024-07-11 02:46:53.688351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.381 [2024-07-11 02:46:53.688378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.381 qpair failed and we were unable to recover it. 
00:41:03.381 [2024-07-11 02:46:53.688471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.381 [2024-07-11 02:46:53.688498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.381 qpair failed and we were unable to recover it. 00:41:03.381 [2024-07-11 02:46:53.688620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.381 [2024-07-11 02:46:53.688670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.381 qpair failed and we were unable to recover it. 00:41:03.381 [2024-07-11 02:46:53.688786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.381 [2024-07-11 02:46:53.688842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.381 qpair failed and we were unable to recover it. 00:41:03.381 [2024-07-11 02:46:53.688971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.381 [2024-07-11 02:46:53.689020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.381 qpair failed and we were unable to recover it. 00:41:03.381 [2024-07-11 02:46:53.689124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.381 [2024-07-11 02:46:53.689159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.381 qpair failed and we were unable to recover it. 
00:41:03.381 [2024-07-11 02:46:53.689290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.381 [2024-07-11 02:46:53.689333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.381 qpair failed and we were unable to recover it. 00:41:03.381 [2024-07-11 02:46:53.689424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.381 [2024-07-11 02:46:53.689506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.381 qpair failed and we were unable to recover it. 00:41:03.381 [2024-07-11 02:46:53.689630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.381 [2024-07-11 02:46:53.689683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.381 qpair failed and we were unable to recover it. 00:41:03.381 [2024-07-11 02:46:53.689805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.381 [2024-07-11 02:46:53.689839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.381 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.689937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.689964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 
00:41:03.382 [2024-07-11 02:46:53.690063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.690092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.690293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.690320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.690410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.690436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.690524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.690551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.690634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.690660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 
00:41:03.382 [2024-07-11 02:46:53.690743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.690770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.690859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.690888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.690979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.691006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.691090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.691117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.691202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.691230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 
00:41:03.382 [2024-07-11 02:46:53.691326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.691353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.691436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.691462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.691565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.691594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.691694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.691724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.691817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.691844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 
00:41:03.382 [2024-07-11 02:46:53.691934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.691961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.692047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.692074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.692157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.692184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.692284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.692317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.692424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.692451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 
00:41:03.382 [2024-07-11 02:46:53.692547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.692574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.692675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.692710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.692808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.692834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.692941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.692985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.693079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.693105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 
00:41:03.382 [2024-07-11 02:46:53.693194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.693225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.693334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.693377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.693471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.693497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.693604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.693650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.693760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.693805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 
00:41:03.382 [2024-07-11 02:46:53.693894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.693922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.694008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.694035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.694119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.694146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.694232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.694261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.694362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.694390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 
00:41:03.382 [2024-07-11 02:46:53.694486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.694525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.694620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.694646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.694755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.694782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.694865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.694892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.694984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.695011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 
00:41:03.382 [2024-07-11 02:46:53.695100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.695127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.695216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.695243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.695334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.695360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.695451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.695480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.695575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.695602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 
00:41:03.382 [2024-07-11 02:46:53.695699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.695726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.695810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.695836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.695914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.695941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.696026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.696052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.696179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.696232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 
00:41:03.382 [2024-07-11 02:46:53.696334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.696375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.696471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.696500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.696598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.696626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.696719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.696745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 00:41:03.382 [2024-07-11 02:46:53.696830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.696856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 
00:41:03.382 [2024-07-11 02:46:53.696946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.382 [2024-07-11 02:46:53.696973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.382 qpair failed and we were unable to recover it. 
[... same three-message sequence (posix.c:1038 connect() failed, errno = 111; nvme_tcp.c:2383 sock connection error; "qpair failed and we were unable to recover it.") repeated continuously from 02:46:53.696 through 02:46:53.712, every attempt targeting addr=10.0.0.2, port=4420, cycling over tqpair=0x2266180, 0x7f333c000b90, 0x7f332c000b90, and 0x7f3334000b90 ...]
00:41:03.384 [2024-07-11 02:46:53.712661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.712697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.712813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.712846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.712950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.712978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.713064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.713095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.713209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.713266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 
00:41:03.384 [2024-07-11 02:46:53.713357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.713384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.713487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.713540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.713624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.713650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.713742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.713769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.713870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.713911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 
00:41:03.384 [2024-07-11 02:46:53.714017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.714044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.714122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.714149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.714236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.714262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.714355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.714383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.714466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.714495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 
00:41:03.384 [2024-07-11 02:46:53.714599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.714628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.714723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.714750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.714842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.714869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.714976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.715021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.715125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.715167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 
00:41:03.384 [2024-07-11 02:46:53.715265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.715293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.715387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.715414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.715524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.715570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.715672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.715704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.715803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.715830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 
00:41:03.384 [2024-07-11 02:46:53.715944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.715989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.716075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.716102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.716202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.716235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.716356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.716399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.716504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.716559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 
00:41:03.384 [2024-07-11 02:46:53.716674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.716719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.716811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.716837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.716935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.716966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.717069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.717094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.717205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.717250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 
00:41:03.384 [2024-07-11 02:46:53.717359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.717402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.717506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.717555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.717667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.717710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.717811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.384 [2024-07-11 02:46:53.717840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.384 qpair failed and we were unable to recover it. 00:41:03.384 [2024-07-11 02:46:53.717956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.717999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 
00:41:03.385 [2024-07-11 02:46:53.718105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.718149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.718260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.718304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.718394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.718420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.718530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.718574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.718692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.718734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 
00:41:03.385 [2024-07-11 02:46:53.718842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.718886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.719013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.719055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.719162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.719204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.719406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.719445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.719536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.719563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 
00:41:03.385 [2024-07-11 02:46:53.719659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.719685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.719789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.719821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.719938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.719969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.720091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.720133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.720236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.720268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 
00:41:03.385 [2024-07-11 02:46:53.720395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.720438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.720559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.720599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.720724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.720768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.720881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.720928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.721013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.721039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 
00:41:03.385 [2024-07-11 02:46:53.721149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.721190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.721282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.721308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.721401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.721427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.721518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.721545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.721632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.721659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 
00:41:03.385 [2024-07-11 02:46:53.721762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.721795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.721893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.721921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.722011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.722038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.722122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.722147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.722237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.722266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 
00:41:03.385 [2024-07-11 02:46:53.722359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.722392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.722487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.722527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.722622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.722650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.722738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.722763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.722856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.722881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 
00:41:03.385 [2024-07-11 02:46:53.722971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.723002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.723121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.723166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.723256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.723284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.723386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.723416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.723522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.723550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 
00:41:03.385 [2024-07-11 02:46:53.723633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.723659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.723769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.723809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.723893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.723919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.724009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.724038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 00:41:03.385 [2024-07-11 02:46:53.724156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.385 [2024-07-11 02:46:53.724197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.385 qpair failed and we were unable to recover it. 
00:41:03.385 [2024-07-11 02:46:53.724295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.385 [2024-07-11 02:46:53.724324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.385 qpair failed and we were unable to recover it.
00:41:03.385 [2024-07-11 02:46:53.724535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.385 [2024-07-11 02:46:53.724562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.385 qpair failed and we were unable to recover it.
00:41:03.385 [2024-07-11 02:46:53.724671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.385 [2024-07-11 02:46:53.724712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.385 qpair failed and we were unable to recover it.
00:41:03.385 [2024-07-11 02:46:53.724826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.385 [2024-07-11 02:46:53.724866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.385 qpair failed and we were unable to recover it.
00:41:03.385 [2024-07-11 02:46:53.724958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.385 [2024-07-11 02:46:53.724984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.385 qpair failed and we were unable to recover it.
00:41:03.385 [2024-07-11 02:46:53.725078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.385 [2024-07-11 02:46:53.725108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.385 qpair failed and we were unable to recover it.
00:41:03.385 [2024-07-11 02:46:53.725212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.385 [2024-07-11 02:46:53.725240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.385 qpair failed and we were unable to recover it.
00:41:03.385 [2024-07-11 02:46:53.725351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.385 [2024-07-11 02:46:53.725396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.385 qpair failed and we were unable to recover it.
00:41:03.385 [2024-07-11 02:46:53.725488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.385 [2024-07-11 02:46:53.725521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.385 qpair failed and we were unable to recover it.
00:41:03.385 [2024-07-11 02:46:53.725623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.385 [2024-07-11 02:46:53.725652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.385 qpair failed and we were unable to recover it.
00:41:03.385 [2024-07-11 02:46:53.725778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.385 [2024-07-11 02:46:53.725824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.385 qpair failed and we were unable to recover it.
00:41:03.385 [2024-07-11 02:46:53.725931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.385 [2024-07-11 02:46:53.725965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.385 qpair failed and we were unable to recover it.
00:41:03.385 [2024-07-11 02:46:53.726072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.385 [2024-07-11 02:46:53.726104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.385 qpair failed and we were unable to recover it.
00:41:03.385 [2024-07-11 02:46:53.726217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.385 [2024-07-11 02:46:53.726258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.385 qpair failed and we were unable to recover it.
00:41:03.385 [2024-07-11 02:46:53.726351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.385 [2024-07-11 02:46:53.726380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.385 qpair failed and we were unable to recover it.
00:41:03.385 [2024-07-11 02:46:53.726467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.385 [2024-07-11 02:46:53.726494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.385 qpair failed and we were unable to recover it.
00:41:03.385 [2024-07-11 02:46:53.726594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.385 [2024-07-11 02:46:53.726620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.385 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.726710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.726737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.726824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.726851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.726941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.726968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.727168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.727195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.727286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.727314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.727443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.727484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.727590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.727619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.727735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.727763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.727864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.727891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.727995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.728024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.728139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.728167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.728283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.728313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.728422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.728450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.728598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.728654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.728749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.728778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.728888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.728931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.729037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.729067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.729169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.729198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.729288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.729317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.729400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.729427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.729525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.729553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.729650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.729677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.729776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.729808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.729890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.729915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.730002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.730027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.730108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.730133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.730220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.730245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.730331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.730356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.730439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.730467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.730571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.730599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.730697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.730727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.730821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.730848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.730935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.730963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.731064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.731091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.731176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.731204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.731295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.731324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.731423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.731454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.731542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.731568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.731655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.731682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.731764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.731790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.731875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.731900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.731983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.732009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.732099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.732127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.732230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.732262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.732356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.732383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.732466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.732497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.732588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.732614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.732724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.732764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.732854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.732880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.732987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.733016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.733138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.733168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.733289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.733317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.733554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.733581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.733667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.733692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.733793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.733821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.733936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.733964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.734063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.734090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.734191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.734220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.734345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.734376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.734483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.734518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.734617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.734644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.734745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.734784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.734869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.734904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.734994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.735021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.735130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.735169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.735280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.735320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.735403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.386 [2024-07-11 02:46:53.735429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.386 qpair failed and we were unable to recover it.
00:41:03.386 [2024-07-11 02:46:53.735645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.387 [2024-07-11 02:46:53.735697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.387 qpair failed and we were unable to recover it.
00:41:03.387 [2024-07-11 02:46:53.735796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.387 [2024-07-11 02:46:53.735822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.387 qpair failed and we were unable to recover it.
00:41:03.387 [2024-07-11 02:46:53.735931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.387 [2024-07-11 02:46:53.735958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.387 qpair failed and we were unable to recover it.
00:41:03.387 [2024-07-11 02:46:53.736071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.387 [2024-07-11 02:46:53.736097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.387 qpair failed and we were unable to recover it.
00:41:03.387 [2024-07-11 02:46:53.736190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.387 [2024-07-11 02:46:53.736218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.387 qpair failed and we were unable to recover it.
00:41:03.387 [2024-07-11 02:46:53.736306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.387 [2024-07-11 02:46:53.736331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.387 qpair failed and we were unable to recover it.
00:41:03.387 [2024-07-11 02:46:53.736416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.387 [2024-07-11 02:46:53.736443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.387 qpair failed and we were unable to recover it.
00:41:03.387 [2024-07-11 02:46:53.736541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.387 [2024-07-11 02:46:53.736570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.387 qpair failed and we were unable to recover it.
00:41:03.387 [2024-07-11 02:46:53.736670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.387 [2024-07-11 02:46:53.736698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.387 qpair failed and we were unable to recover it.
00:41:03.387 [2024-07-11 02:46:53.736838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.387 [2024-07-11 02:46:53.736889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.387 qpair failed and we were unable to recover it.
00:41:03.387 [2024-07-11 02:46:53.736979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.387 [2024-07-11 02:46:53.737006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.387 qpair failed and we were unable to recover it.
00:41:03.387 [2024-07-11 02:46:53.737095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.737123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.737222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.737250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.737338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.737363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.737472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.737516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.737628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.737667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 
00:41:03.387 [2024-07-11 02:46:53.737772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.737799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.737908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.737934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.738043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.738069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.738162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.738187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.738290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.738319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 
00:41:03.387 [2024-07-11 02:46:53.738434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.738462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.738575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.738620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.738703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.738731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.738838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.738879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.738967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.738995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 
00:41:03.387 [2024-07-11 02:46:53.739084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.739109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.739202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.739228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.739318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.739344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.739432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.739459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.739546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.739572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 
00:41:03.387 [2024-07-11 02:46:53.739666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.739692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.739771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.739797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.739892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.739917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.740018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.740042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.740125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.740159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 
00:41:03.387 [2024-07-11 02:46:53.740261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.740289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.740397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.740425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.740520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.740548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.740661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.740689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.740806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.740831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 
00:41:03.387 [2024-07-11 02:46:53.740935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.740961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.741063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.741090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.741176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.741203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.741302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.741329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.741444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.741473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 
00:41:03.387 [2024-07-11 02:46:53.741627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.741669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.741770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.741799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.741890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.741923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.742013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.742040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.742137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.742165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 
00:41:03.387 [2024-07-11 02:46:53.742257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.742285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.742375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.742402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.742487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.742524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.742616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.742642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.742726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.742751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 
00:41:03.387 [2024-07-11 02:46:53.742839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.742864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.742948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.742973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.387 qpair failed and we were unable to recover it. 00:41:03.387 [2024-07-11 02:46:53.743059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.387 [2024-07-11 02:46:53.743084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.743171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.743196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.743282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.743310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 
00:41:03.388 [2024-07-11 02:46:53.743396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.743422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.743516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.743544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.743640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.743666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.743753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.743780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.743875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.743903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 
00:41:03.388 [2024-07-11 02:46:53.743989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.744016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.744100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.744126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.744213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.744239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.744323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.744348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.744466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.744492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 
00:41:03.388 [2024-07-11 02:46:53.744583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.744608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.744693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.744718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.744803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.744827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.744913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.744938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.745022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.745048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 
00:41:03.388 [2024-07-11 02:46:53.745145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.745171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.745258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.745286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.745382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.745409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.745506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.745540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.745637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.745663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 
00:41:03.388 [2024-07-11 02:46:53.745752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.745779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.745864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.745891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.745973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.745999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.746092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.746126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.746210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.746235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 
00:41:03.388 [2024-07-11 02:46:53.746319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.746346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.746467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.746492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.746589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.746615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.746695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.746724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.746806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.746830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 
00:41:03.388 [2024-07-11 02:46:53.746917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.746942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.747033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.747058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.747150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.747177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.747277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.747310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.747432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.747518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 
00:41:03.388 [2024-07-11 02:46:53.747651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.747714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.747825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.747889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.747998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.748062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.748151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.748176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.748268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.748296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 
00:41:03.388 [2024-07-11 02:46:53.748399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.748424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.748525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.748555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.748646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.748672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.748770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.748797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.748885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.748912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 
00:41:03.388 [2024-07-11 02:46:53.749005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.749032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.749129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.749156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.749248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.749274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.749386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.749413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.749493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.749528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 
00:41:03.388 [2024-07-11 02:46:53.749623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.749651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.749739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.749766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.749876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.749902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.749986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.750011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.750102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.750130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 
00:41:03.388 [2024-07-11 02:46:53.750218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.750252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.750341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.750368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.750459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.750484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.750578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.750606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.750693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.750718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 
00:41:03.388 [2024-07-11 02:46:53.750808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.750835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.750926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.750953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.751047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.388 [2024-07-11 02:46:53.751084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.388 qpair failed and we were unable to recover it. 00:41:03.388 [2024-07-11 02:46:53.751183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.751228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.751320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.751348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 
00:41:03.389 [2024-07-11 02:46:53.751429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.751456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.751536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.751562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.751648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.751675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.751768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.751794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.751888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.751922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 
00:41:03.389 [2024-07-11 02:46:53.752022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.752050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.752139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.752166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.752264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.752290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.752374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.752400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.752483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.752517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 
00:41:03.389 [2024-07-11 02:46:53.752609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.752635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.752717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.752742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.752825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.752851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.752953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.752980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.753073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.753099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 
00:41:03.389 [2024-07-11 02:46:53.753193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.753220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.753310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.753338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.753425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.753452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.753549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.753578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.753665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.753690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 
00:41:03.389 [2024-07-11 02:46:53.753774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.753799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.753880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.753905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.753994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.754021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.754105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.754131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.754222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.754250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 
00:41:03.389 [2024-07-11 02:46:53.754339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.754366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.754452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.754477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.754568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.754594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.754690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.754717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.754798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.754824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 
00:41:03.389 [2024-07-11 02:46:53.754909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.754939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.755022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.755049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.755133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.755159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.755244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.755270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.755385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.755413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 
00:41:03.389 [2024-07-11 02:46:53.755506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.755538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.755626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.755653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.755738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.755766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.755858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.755889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.755978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.756004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 
00:41:03.389 [2024-07-11 02:46:53.756091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.756118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.756202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.756227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.756326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.756354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.756444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.756472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.756680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.756708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 
00:41:03.389 [2024-07-11 02:46:53.756795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.756821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.756902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.756928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.757019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.757045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.757241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.757267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.757353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.757381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 
00:41:03.389 [2024-07-11 02:46:53.757475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.757502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.757595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.757621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.757712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.757741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.757829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.757856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.757944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.757971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 
00:41:03.389 [2024-07-11 02:46:53.758062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.758091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.758174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.758202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.758294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.758323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.758531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.758561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.389 [2024-07-11 02:46:53.758642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.758668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 
00:41:03.389 [2024-07-11 02:46:53.758764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.389 [2024-07-11 02:46:53.758789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.389 qpair failed and we were unable to recover it. 00:41:03.390 [2024-07-11 02:46:53.758871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.390 [2024-07-11 02:46:53.758895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.390 qpair failed and we were unable to recover it. 00:41:03.390 [2024-07-11 02:46:53.758987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.390 [2024-07-11 02:46:53.759014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.390 qpair failed and we were unable to recover it. 00:41:03.390 [2024-07-11 02:46:53.759095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.390 [2024-07-11 02:46:53.759120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.390 qpair failed and we were unable to recover it. 00:41:03.390 [2024-07-11 02:46:53.759201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.390 [2024-07-11 02:46:53.759227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.390 qpair failed and we were unable to recover it. 
00:41:03.390 [2024-07-11 02:46:53.759322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.390 [2024-07-11 02:46:53.759347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.390 qpair failed and we were unable to recover it.
[... the three-line connect()/qpair-failure pattern above repeats over a hundred more times between 02:46:53.759432 and 02:46:53.773527, with only the timestamps and the tqpair pointer varying (0x2266180, 0x7f333c000b90, 0x7f332c000b90, 0x7f3334000b90); every occurrence reports errno = 111 against addr=10.0.0.2, port=4420 ...]
00:41:03.675 [2024-07-11 02:46:53.773619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.773646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.773737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.773765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.773854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.773880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.773978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.774006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.774092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.774118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 
00:41:03.675 [2024-07-11 02:46:53.774201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.774227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.774322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.774349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.774439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.774469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.774569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.774597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.774686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.774713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 
00:41:03.675 [2024-07-11 02:46:53.774794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.774819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.774908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.774942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.775026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.775053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.775141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.775166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.775247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.775273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 
00:41:03.675 [2024-07-11 02:46:53.775354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.775379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.775467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.775495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.775603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.775631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.775727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.775756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.775855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.775882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 
00:41:03.675 [2024-07-11 02:46:53.775974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.776001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.776090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.776119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.776227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.776287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.776371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.776398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.776480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.776507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 
00:41:03.675 [2024-07-11 02:46:53.776603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.776630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.776716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.776743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.776838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.776864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.776962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.776988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.777071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.777099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 
00:41:03.675 [2024-07-11 02:46:53.777189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.777216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.777307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.777332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.777422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.777449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.777533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.777559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 00:41:03.675 [2024-07-11 02:46:53.777650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.777675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.675 qpair failed and we were unable to recover it. 
00:41:03.675 [2024-07-11 02:46:53.777763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.675 [2024-07-11 02:46:53.777792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.777902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.777930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.778020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.778046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.778138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.778166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.778254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.778281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 
00:41:03.676 [2024-07-11 02:46:53.778375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.778401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.778487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.778523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.778609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.778637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.778732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.778759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.778849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.778874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 
00:41:03.676 [2024-07-11 02:46:53.778958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.778984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.779102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.779133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.779220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.779247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.779338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.779363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.779449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.779475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 
00:41:03.676 [2024-07-11 02:46:53.779565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.779591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.779684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.779716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.779819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.779846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.779943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.779969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.780059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.780084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 
00:41:03.676 [2024-07-11 02:46:53.780165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.780191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.780272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.780297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.780390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.780417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.780564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.780592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.780678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.780703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 
00:41:03.676 [2024-07-11 02:46:53.780789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.780814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.780897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.780922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.781004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.781028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.781115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.781142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.781238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.781268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 
00:41:03.676 [2024-07-11 02:46:53.781366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.781393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.781476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.781503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.781602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.781629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.781720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.781747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.781828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.781853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 
00:41:03.676 [2024-07-11 02:46:53.781938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.781964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.782045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.782071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.782154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.782180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.676 [2024-07-11 02:46:53.782276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.676 [2024-07-11 02:46:53.782301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.676 qpair failed and we were unable to recover it. 00:41:03.677 [2024-07-11 02:46:53.782388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.782419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 
00:41:03.677 [2024-07-11 02:46:53.782529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.782557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 00:41:03.677 [2024-07-11 02:46:53.782646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.782674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 00:41:03.677 [2024-07-11 02:46:53.782762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.782788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 00:41:03.677 [2024-07-11 02:46:53.782873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.782903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 00:41:03.677 [2024-07-11 02:46:53.782989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.783016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 
00:41:03.677 [2024-07-11 02:46:53.783107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.783134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 00:41:03.677 [2024-07-11 02:46:53.783216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.783242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 00:41:03.677 [2024-07-11 02:46:53.783326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.783354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 00:41:03.677 [2024-07-11 02:46:53.783448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.783475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 00:41:03.677 [2024-07-11 02:46:53.783588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.783615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 
00:41:03.677 [2024-07-11 02:46:53.783703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.783729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 00:41:03.677 [2024-07-11 02:46:53.783822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.783848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 00:41:03.677 [2024-07-11 02:46:53.783938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.783964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 00:41:03.677 [2024-07-11 02:46:53.784049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.784075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 00:41:03.677 [2024-07-11 02:46:53.784169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.784195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 
00:41:03.677 [2024-07-11 02:46:53.784280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.784305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 00:41:03.677 [2024-07-11 02:46:53.784389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.784415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 00:41:03.677 [2024-07-11 02:46:53.784502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.784534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 00:41:03.677 [2024-07-11 02:46:53.784620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.784645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 00:41:03.677 [2024-07-11 02:46:53.784734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.784760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 
00:41:03.677 [2024-07-11 02:46:53.784847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.784874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 00:41:03.677 [2024-07-11 02:46:53.784956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.784983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 00:41:03.677 [2024-07-11 02:46:53.785063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.785089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 00:41:03.677 [2024-07-11 02:46:53.785168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.785194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 00:41:03.677 [2024-07-11 02:46:53.785276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.785303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 
00:41:03.677 [2024-07-11 02:46:53.785401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.785428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 00:41:03.677 [2024-07-11 02:46:53.785521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.785548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 00:41:03.677 [2024-07-11 02:46:53.785631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.785657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.677 qpair failed and we were unable to recover it. 00:41:03.677 [2024-07-11 02:46:53.785740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.677 [2024-07-11 02:46:53.785765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.785848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.785874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 
00:41:03.678 [2024-07-11 02:46:53.785964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.785993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.786084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.786110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.786216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.786241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.786333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.786360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.786452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.786479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 
00:41:03.678 [2024-07-11 02:46:53.786572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.786599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.786682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.786708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.786799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.786826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.786911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.786938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.787052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.787112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 
00:41:03.678 [2024-07-11 02:46:53.787213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.787240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.787332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.787357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.787440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.787465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.787563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.787589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.787679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.787704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 
00:41:03.678 [2024-07-11 02:46:53.787786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.787811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.787905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.787931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.788017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.788043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.788135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.788160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.788253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.788279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 
00:41:03.678 [2024-07-11 02:46:53.788373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.788424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.788530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.788559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.788669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.788729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.788819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.788845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.788930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.788956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 
00:41:03.678 [2024-07-11 02:46:53.789039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.789063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.789142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.789167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.789259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.789287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.789379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.789405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.789490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.789528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 
00:41:03.678 [2024-07-11 02:46:53.789618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.789645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.789739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.789765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.789851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.789877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.789967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.789995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.790089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.790114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 
00:41:03.678 [2024-07-11 02:46:53.790193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.790219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.678 qpair failed and we were unable to recover it. 00:41:03.678 [2024-07-11 02:46:53.790300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.678 [2024-07-11 02:46:53.790325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.790408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.790434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.790520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.790546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.790629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.790654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 
00:41:03.679 [2024-07-11 02:46:53.790735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.790764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.790846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.790872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.790970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.790995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.791082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.791108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.791193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.791218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 
00:41:03.679 [2024-07-11 02:46:53.791305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.791329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.791410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.791434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.791524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.791559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.791655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.791682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.791766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.791792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 
00:41:03.679 [2024-07-11 02:46:53.791888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.791933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.792029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.792056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.792254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.792284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.792376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.792405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.792508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.792547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 
00:41:03.679 [2024-07-11 02:46:53.792665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.792694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.792784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.792812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.792901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.792927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.793016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.793045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.793135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.793164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 
00:41:03.679 [2024-07-11 02:46:53.793257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.793284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.793377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.793402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.793487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.793519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.793602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.793627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.793710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.793736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 
00:41:03.679 [2024-07-11 02:46:53.793827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.793853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.793950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.793978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.794067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.794099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.794182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.794208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.794293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.794319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 
00:41:03.679 [2024-07-11 02:46:53.794400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.794426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.794523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.794551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.794649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.794675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.794759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.679 [2024-07-11 02:46:53.794787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.679 qpair failed and we were unable to recover it. 00:41:03.679 [2024-07-11 02:46:53.794875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.680 [2024-07-11 02:46:53.794901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.680 qpair failed and we were unable to recover it. 
00:41:03.680 [2024-07-11 02:46:53.795002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.795029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.795114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.795138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.795220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.795246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.795337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.795362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.795445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.795469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.795572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.795600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.795693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.795719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.795813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.795842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.795930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.795957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.796046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.796073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.796163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.796189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.796271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.796296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.796389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.796414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.796503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.796541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.796628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.796653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.796743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.796768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.796852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.796877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.796966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.796991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.797086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.797115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.797213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.797243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.797324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.797350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.797465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.797531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.797624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.797651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.797736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.797763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.797852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.797878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.797970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.797999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.798085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.798112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.798197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.798224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.798320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.798345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.798430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.798457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.798547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.798573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.798661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.798687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.798781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.798805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.798899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.798924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.799011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.799039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.799121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.799147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.680 qpair failed and we were unable to recover it.
00:41:03.680 [2024-07-11 02:46:53.799231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.680 [2024-07-11 02:46:53.799257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.799344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.799372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.799462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.799490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.799581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.799608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.799699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.799727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.799821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.799846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.799929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.799956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.800047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.800072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.800168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.800194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.800277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.800306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.800392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.800427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.800521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.800548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.800638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.800671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.800762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.800787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.800875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.800900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.800986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.801012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.801096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.801121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.801202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.801228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.801316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.801345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.801444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.801472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.801571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.801597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.801694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.801720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.801804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.801829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.801921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.801947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.802045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.802071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.802159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.802187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.802277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.802303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.802392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.802417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.802504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.802538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.802621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.802646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.802727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.802753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.802847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.802872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.802957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.802983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.803067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.803095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.803185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.803213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.803305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.803332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.803435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.803463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.803568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.803594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.803679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.803705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.803790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.681 [2024-07-11 02:46:53.803816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.681 qpair failed and we were unable to recover it.
00:41:03.681 [2024-07-11 02:46:53.803910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.682 [2024-07-11 02:46:53.803935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.682 qpair failed and we were unable to recover it.
00:41:03.682 [2024-07-11 02:46:53.804018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.682 [2024-07-11 02:46:53.804044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.682 qpair failed and we were unable to recover it.
00:41:03.682 [2024-07-11 02:46:53.804131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.682 [2024-07-11 02:46:53.804157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.682 qpair failed and we were unable to recover it.
00:41:03.682 [2024-07-11 02:46:53.804256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.682 [2024-07-11 02:46:53.804283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.682 qpair failed and we were unable to recover it.
00:41:03.682 [2024-07-11 02:46:53.804378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.682 [2024-07-11 02:46:53.804412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.682 qpair failed and we were unable to recover it.
00:41:03.682 [2024-07-11 02:46:53.804502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.682 [2024-07-11 02:46:53.804534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.682 qpair failed and we were unable to recover it.
00:41:03.682 [2024-07-11 02:46:53.804624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.682 [2024-07-11 02:46:53.804648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.682 qpair failed and we were unable to recover it.
00:41:03.682 [2024-07-11 02:46:53.804742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.682 [2024-07-11 02:46:53.804770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.682 qpair failed and we were unable to recover it.
00:41:03.682 [2024-07-11 02:46:53.804858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.682 [2024-07-11 02:46:53.804884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.682 qpair failed and we were unable to recover it.
00:41:03.682 [2024-07-11 02:46:53.804973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.682 [2024-07-11 02:46:53.805000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.682 qpair failed and we were unable to recover it.
00:41:03.682 [2024-07-11 02:46:53.805104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.682 [2024-07-11 02:46:53.805168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.682 qpair failed and we were unable to recover it.
00:41:03.682 [2024-07-11 02:46:53.805255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.682 [2024-07-11 02:46:53.805284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.682 qpair failed and we were unable to recover it.
00:41:03.682 [2024-07-11 02:46:53.805379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.682 [2024-07-11 02:46:53.805406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.682 qpair failed and we were unable to recover it.
00:41:03.682 [2024-07-11 02:46:53.805499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.682 [2024-07-11 02:46:53.805533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.682 qpair failed and we were unable to recover it.
00:41:03.682 [2024-07-11 02:46:53.805617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.682 [2024-07-11 02:46:53.805642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.682 qpair failed and we were unable to recover it.
00:41:03.682 [2024-07-11 02:46:53.805740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.682 [2024-07-11 02:46:53.805766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.682 qpair failed and we were unable to recover it.
00:41:03.682 [2024-07-11 02:46:53.805854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.682 [2024-07-11 02:46:53.805880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.682 qpair failed and we were unable to recover it.
00:41:03.682 [2024-07-11 02:46:53.805977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.682 [2024-07-11 02:46:53.806006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.682 qpair failed and we were unable to recover it.
00:41:03.682 [2024-07-11 02:46:53.806100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.682 [2024-07-11 02:46:53.806126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.682 qpair failed and we were unable to recover it.
00:41:03.682 [2024-07-11 02:46:53.806212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.682 [2024-07-11 02:46:53.806236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.682 qpair failed and we were unable to recover it. 00:41:03.682 [2024-07-11 02:46:53.806322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.682 [2024-07-11 02:46:53.806347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.682 qpair failed and we were unable to recover it. 00:41:03.682 [2024-07-11 02:46:53.806437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.682 [2024-07-11 02:46:53.806462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.682 qpair failed and we were unable to recover it. 00:41:03.682 [2024-07-11 02:46:53.806577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.682 [2024-07-11 02:46:53.806610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.682 qpair failed and we were unable to recover it. 00:41:03.682 [2024-07-11 02:46:53.806699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.682 [2024-07-11 02:46:53.806728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.682 qpair failed and we were unable to recover it. 
00:41:03.682 [2024-07-11 02:46:53.806821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.682 [2024-07-11 02:46:53.806847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.682 qpair failed and we were unable to recover it. 00:41:03.682 [2024-07-11 02:46:53.806940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.682 [2024-07-11 02:46:53.806967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.682 qpair failed and we were unable to recover it. 00:41:03.682 [2024-07-11 02:46:53.807056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.682 [2024-07-11 02:46:53.807083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.682 qpair failed and we were unable to recover it. 00:41:03.682 [2024-07-11 02:46:53.807180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.682 [2024-07-11 02:46:53.807207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.682 qpair failed and we were unable to recover it. 00:41:03.682 [2024-07-11 02:46:53.807302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.682 [2024-07-11 02:46:53.807328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.682 qpair failed and we were unable to recover it. 
00:41:03.682 [2024-07-11 02:46:53.807410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.682 [2024-07-11 02:46:53.807437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.682 qpair failed and we were unable to recover it. 00:41:03.682 [2024-07-11 02:46:53.807537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.682 [2024-07-11 02:46:53.807567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.682 qpair failed and we were unable to recover it. 00:41:03.682 [2024-07-11 02:46:53.807658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.682 [2024-07-11 02:46:53.807685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.682 qpair failed and we were unable to recover it. 00:41:03.682 [2024-07-11 02:46:53.807774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.682 [2024-07-11 02:46:53.807799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.807880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.807905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 
00:41:03.683 [2024-07-11 02:46:53.807986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.808011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.808101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.808129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.808220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.808247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.808334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.808366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.808448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.808475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 
00:41:03.683 [2024-07-11 02:46:53.808570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.808597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.808687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.808713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.808806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.808834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.808920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.808945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.809033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.809059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 
00:41:03.683 [2024-07-11 02:46:53.809142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.809166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.809251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.809277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.809363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.809388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.809475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.809501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.809592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.809617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 
00:41:03.683 [2024-07-11 02:46:53.809698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.809722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.809815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.809842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.809944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.809972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.810063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.810090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.810185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.810212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 
00:41:03.683 [2024-07-11 02:46:53.810299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.810326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.810411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.810437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.810537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.810566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.810653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.810677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.810763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.810788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 
00:41:03.683 [2024-07-11 02:46:53.810872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.810896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.810986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.811014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.811105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.811131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.811216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.811243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.811336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.811363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 
00:41:03.683 [2024-07-11 02:46:53.811454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.811491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.811593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.811619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.811712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.811737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.811822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.811849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.811941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.811970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 
00:41:03.683 [2024-07-11 02:46:53.812069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.812096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.812187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.812213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.812293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.812318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.683 [2024-07-11 02:46:53.812416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.683 [2024-07-11 02:46:53.812445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.683 qpair failed and we were unable to recover it. 00:41:03.684 [2024-07-11 02:46:53.812536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.812563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 
00:41:03.684 [2024-07-11 02:46:53.812644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.812668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 00:41:03.684 [2024-07-11 02:46:53.812756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.812782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 00:41:03.684 [2024-07-11 02:46:53.812865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.812890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 00:41:03.684 [2024-07-11 02:46:53.812973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.812998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 00:41:03.684 [2024-07-11 02:46:53.813086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.813111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 
00:41:03.684 [2024-07-11 02:46:53.813197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.813222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 00:41:03.684 [2024-07-11 02:46:53.813308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.813333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 00:41:03.684 [2024-07-11 02:46:53.813421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.813449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 00:41:03.684 [2024-07-11 02:46:53.813549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.813577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 00:41:03.684 [2024-07-11 02:46:53.813664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.813693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 
00:41:03.684 [2024-07-11 02:46:53.813784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.813823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 00:41:03.684 [2024-07-11 02:46:53.813913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.813940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 00:41:03.684 [2024-07-11 02:46:53.814032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.814059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 00:41:03.684 [2024-07-11 02:46:53.814151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.814178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 00:41:03.684 [2024-07-11 02:46:53.814269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.814293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 
00:41:03.684 [2024-07-11 02:46:53.814378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.814403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 00:41:03.684 [2024-07-11 02:46:53.814494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.814526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 00:41:03.684 [2024-07-11 02:46:53.814634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.814663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 00:41:03.684 [2024-07-11 02:46:53.814762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.814789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 00:41:03.684 [2024-07-11 02:46:53.814878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.814905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 
00:41:03.684 [2024-07-11 02:46:53.815016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.815044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 00:41:03.684 [2024-07-11 02:46:53.815129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.815155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 00:41:03.684 [2024-07-11 02:46:53.815245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.815274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 00:41:03.684 [2024-07-11 02:46:53.815365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.815393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 00:41:03.684 [2024-07-11 02:46:53.815473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.815499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 
00:41:03.684 [2024-07-11 02:46:53.815596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.815622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 00:41:03.684 [2024-07-11 02:46:53.815703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.815728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 00:41:03.684 [2024-07-11 02:46:53.815815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.815841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 00:41:03.684 [2024-07-11 02:46:53.815928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.815952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 00:41:03.684 [2024-07-11 02:46:53.816042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.684 [2024-07-11 02:46:53.816069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.684 qpair failed and we were unable to recover it. 
00:41:03.684 [2024-07-11 02:46:53.816159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.684 [2024-07-11 02:46:53.816184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.684 qpair failed and we were unable to recover it.
00:41:03.684 [2024-07-11 02:46:53.816271 through 02:46:53.829415] (same three-line failure pattern repeated ~110 more times: posix.c:1038:posix_sock_create reports connect() failed with errno = 111 (ECONNREFUSED), then nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f333c000b90, 0x7f332c000b90, 0x7f3334000b90, or 0x2266180, all with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.")
00:41:03.687 [2024-07-11 02:46:53.829504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.687 [2024-07-11 02:46:53.829534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.687 qpair failed and we were unable to recover it.
00:41:03.687 [2024-07-11 02:46:53.829623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.687 [2024-07-11 02:46:53.829647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.687 qpair failed and we were unable to recover it. 00:41:03.687 [2024-07-11 02:46:53.829731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.687 [2024-07-11 02:46:53.829756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.687 qpair failed and we were unable to recover it. 00:41:03.687 [2024-07-11 02:46:53.829854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.687 [2024-07-11 02:46:53.829879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.687 qpair failed and we were unable to recover it. 00:41:03.687 [2024-07-11 02:46:53.829960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.687 [2024-07-11 02:46:53.829984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.687 qpair failed and we were unable to recover it. 00:41:03.687 [2024-07-11 02:46:53.830068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.687 [2024-07-11 02:46:53.830092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.687 qpair failed and we were unable to recover it. 
00:41:03.687 [2024-07-11 02:46:53.830182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.687 [2024-07-11 02:46:53.830209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.687 qpair failed and we were unable to recover it. 00:41:03.687 [2024-07-11 02:46:53.830296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.687 [2024-07-11 02:46:53.830323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.687 qpair failed and we were unable to recover it. 00:41:03.687 [2024-07-11 02:46:53.830415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.830441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.830532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.830559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.830653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.830678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 
00:41:03.688 [2024-07-11 02:46:53.830768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.830795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.830881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.830907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.830990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.831016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.831106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.831135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.831218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.831250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 
00:41:03.688 [2024-07-11 02:46:53.831341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.831368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.831456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.831483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.831581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.831609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.831693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.831717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.831800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.831827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 
00:41:03.688 [2024-07-11 02:46:53.831930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.831956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.832040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.832066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.832162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.832187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.832271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.832297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.832384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.832410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 
00:41:03.688 [2024-07-11 02:46:53.832492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.832525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.832612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.832637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.832727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.832754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.832841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.832868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.832956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.832982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 
00:41:03.688 [2024-07-11 02:46:53.833067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.833093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.833178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.833205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.833296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.833324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.833415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.833445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.833549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.833576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 
00:41:03.688 [2024-07-11 02:46:53.833662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.833688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.833781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.833806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.833891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.833919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.834011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.834036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.834134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.834160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 
00:41:03.688 [2024-07-11 02:46:53.834241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.834268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.834359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.834384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.834463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.834488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.834577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.834604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.834696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.834722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 
00:41:03.688 [2024-07-11 02:46:53.834818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.834845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.834948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.688 [2024-07-11 02:46:53.834974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.688 qpair failed and we were unable to recover it. 00:41:03.688 [2024-07-11 02:46:53.835063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.835090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 00:41:03.689 [2024-07-11 02:46:53.835182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.835207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 00:41:03.689 [2024-07-11 02:46:53.835293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.835319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 
00:41:03.689 [2024-07-11 02:46:53.835400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.835426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 00:41:03.689 [2024-07-11 02:46:53.835514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.835540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 00:41:03.689 [2024-07-11 02:46:53.835630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.835655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 00:41:03.689 [2024-07-11 02:46:53.835739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.835763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 00:41:03.689 [2024-07-11 02:46:53.835848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.835872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 
00:41:03.689 [2024-07-11 02:46:53.835964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.835991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 00:41:03.689 [2024-07-11 02:46:53.836096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.836122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 00:41:03.689 [2024-07-11 02:46:53.836206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.836231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 00:41:03.689 [2024-07-11 02:46:53.836331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.836358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 00:41:03.689 [2024-07-11 02:46:53.836453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.836481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 
00:41:03.689 [2024-07-11 02:46:53.836577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.836607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 00:41:03.689 [2024-07-11 02:46:53.836698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.836725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 00:41:03.689 [2024-07-11 02:46:53.836830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.836857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 00:41:03.689 [2024-07-11 02:46:53.836948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.836975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 00:41:03.689 [2024-07-11 02:46:53.837065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.837091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 
00:41:03.689 [2024-07-11 02:46:53.837176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.837205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 00:41:03.689 [2024-07-11 02:46:53.837307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.837334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 00:41:03.689 [2024-07-11 02:46:53.837437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.837465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 00:41:03.689 [2024-07-11 02:46:53.837576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.837605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 00:41:03.689 [2024-07-11 02:46:53.837691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.837716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 
00:41:03.689 [2024-07-11 02:46:53.837810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.837836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 00:41:03.689 [2024-07-11 02:46:53.837920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.837946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 00:41:03.689 [2024-07-11 02:46:53.838048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.838076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 00:41:03.689 [2024-07-11 02:46:53.838162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.838188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 00:41:03.689 [2024-07-11 02:46:53.838282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.689 [2024-07-11 02:46:53.838307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.689 qpair failed and we were unable to recover it. 
00:41:03.689 [2024-07-11 02:46:53.838390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.689 [2024-07-11 02:46:53.838414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.689 qpair failed and we were unable to recover it.
[... identical triple (posix_sock_create connect() failed errno = 111 / nvme_tcp_qpair_connect_sock sock connection error with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats continuously from 02:46:53.838507 through 02:46:53.852127, cycling over tqpair=0x2266180, 0x7f333c000b90, 0x7f332c000b90, and 0x7f3334000b90 ...]
00:41:03.692 [2024-07-11 02:46:53.852213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.692 [2024-07-11 02:46:53.852239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.692 qpair failed and we were unable to recover it.
00:41:03.692 [2024-07-11 02:46:53.852334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.692 [2024-07-11 02:46:53.852362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.692 qpair failed and we were unable to recover it. 00:41:03.692 [2024-07-11 02:46:53.852454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.692 [2024-07-11 02:46:53.852482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.692 qpair failed and we were unable to recover it. 00:41:03.692 [2024-07-11 02:46:53.852588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.692 [2024-07-11 02:46:53.852615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.692 qpair failed and we were unable to recover it. 00:41:03.692 [2024-07-11 02:46:53.852704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.692 [2024-07-11 02:46:53.852731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.692 qpair failed and we were unable to recover it. 00:41:03.692 [2024-07-11 02:46:53.852817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.692 [2024-07-11 02:46:53.852844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.692 qpair failed and we were unable to recover it. 
00:41:03.693 [2024-07-11 02:46:53.852943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.852970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.853057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.853084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.853167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.853193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.853287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.853314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.853405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.853431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 
00:41:03.693 [2024-07-11 02:46:53.853530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.853557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.853646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.853672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.853767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.853792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.853885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.853910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.853991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.854016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 
00:41:03.693 [2024-07-11 02:46:53.854101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.854127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.854215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.854241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.854324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.854350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.854466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.854495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.854599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.854629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 
00:41:03.693 [2024-07-11 02:46:53.854749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.854810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.854898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.854925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.855009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.855036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.855123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.855151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.855241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.855269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 
00:41:03.693 [2024-07-11 02:46:53.855368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.855396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.855489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.855524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.855617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.855646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.855729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.855755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.855848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.855874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 
00:41:03.693 [2024-07-11 02:46:53.856002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.856027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.856114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.856141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.856248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.856311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.856407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.856434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.856521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.856551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 
00:41:03.693 [2024-07-11 02:46:53.856645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.856672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.856762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.856790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.856883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.856909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.856993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.857019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.857107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.857134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 
00:41:03.693 [2024-07-11 02:46:53.857216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.857250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.693 [2024-07-11 02:46:53.857373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.693 [2024-07-11 02:46:53.857399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.693 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.857496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.857532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.857619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.857644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.857731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.857757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 
00:41:03.694 [2024-07-11 02:46:53.857846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.857872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.857958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.857987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.858075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.858102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.858202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.858231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.858325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.858352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 
00:41:03.694 [2024-07-11 02:46:53.858441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.858467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.858563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.858590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.858682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.858709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.858816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.858843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.858925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.858951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 
00:41:03.694 [2024-07-11 02:46:53.859045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.859074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.859176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.859225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.859433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.859462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.859574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.859612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.859734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.859777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 
00:41:03.694 [2024-07-11 02:46:53.859904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.859963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.860055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.860084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.860189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.860217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.860311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.860341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.860447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.860475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 
00:41:03.694 [2024-07-11 02:46:53.860580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.860607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.860695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.860722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.860815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.860842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.860936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.860964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.861055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.861084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 
00:41:03.694 [2024-07-11 02:46:53.861172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.861199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.861286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.861313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.861403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.861429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.861533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.861561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.861647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.861673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 
00:41:03.694 [2024-07-11 02:46:53.861758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.861783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.861864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.694 [2024-07-11 02:46:53.861889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.694 qpair failed and we were unable to recover it. 00:41:03.694 [2024-07-11 02:46:53.861992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.695 [2024-07-11 02:46:53.862021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.695 qpair failed and we were unable to recover it. 00:41:03.695 [2024-07-11 02:46:53.862115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.695 [2024-07-11 02:46:53.862144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.695 qpair failed and we were unable to recover it. 00:41:03.695 [2024-07-11 02:46:53.862227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.695 [2024-07-11 02:46:53.862253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.695 qpair failed and we were unable to recover it. 
00:41:03.695 [2024-07-11 02:46:53.862342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.695 [2024-07-11 02:46:53.862369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.695 qpair failed and we were unable to recover it. 00:41:03.695 [2024-07-11 02:46:53.862463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.695 [2024-07-11 02:46:53.862491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.695 qpair failed and we were unable to recover it. 00:41:03.695 [2024-07-11 02:46:53.862625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.695 [2024-07-11 02:46:53.862656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.695 qpair failed and we were unable to recover it. 00:41:03.695 [2024-07-11 02:46:53.862773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.695 [2024-07-11 02:46:53.862802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.695 qpair failed and we were unable to recover it. 00:41:03.695 [2024-07-11 02:46:53.862892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.695 [2024-07-11 02:46:53.862918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.695 qpair failed and we were unable to recover it. 
00:41:03.695 [2024-07-11 02:46:53.863008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.863039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.863127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.863161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.863262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.863307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.863397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.863424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.863546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.863575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.863666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.863693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.863781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.863807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.863904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.863931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.864023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.864052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.864145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.864174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.864266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.864294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.864384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.864411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.864498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.864531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.864631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.864658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.864758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.864787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.864888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.864919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.865032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.865064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.865166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.865192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.865276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.865305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.865413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.865445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.865569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.865598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.865690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.865717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.865826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.865852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.865944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.865971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.866060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.866087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.866169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.866196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.866281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.866306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.866404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.866433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.866522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.695 [2024-07-11 02:46:53.866549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.695 qpair failed and we were unable to recover it.
00:41:03.695 [2024-07-11 02:46:53.866639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.866665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.866751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.866778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.866869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.866895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.866985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.867012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.867097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.867123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.867207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.867233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.867320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.867346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.867436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.867462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.867560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.867587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.867675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.867703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.867787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.867814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.867900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.867931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.868025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.868051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.868146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.868175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.868272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.868299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.868380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.868406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.868490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.868521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.868612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.868638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.868725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.868751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.868835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.868862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.868946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.868974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.869067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.869097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.869188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.869216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.869308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.869334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.869425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.869452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.869565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.869594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.869692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.869719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.869804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.869831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.869920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.869947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.870033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.870059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.870146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.870172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.870258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.870284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.870367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.870393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.870505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.696 [2024-07-11 02:46:53.870541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.696 qpair failed and we were unable to recover it.
00:41:03.696 [2024-07-11 02:46:53.870631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.870659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.870749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.870775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.870863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.870889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.870974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.871000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.871091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.871123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.871270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.871300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.871386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.871412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.871537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.871569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.871665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.871691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.871783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.871810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.871898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.871924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.872014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.872041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.872130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.872159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.872242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.872269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.872356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.872382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.872475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.872501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.872597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.872624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.872709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.872738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.872843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.872872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.872966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.872992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.873079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.873106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.873199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.873227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.873318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.873345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.873433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.873461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.873552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.873580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.873683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.873709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.873795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.873823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.873908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.873935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.874026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.874053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.874165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.874208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.874288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.874314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.874420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.697 [2024-07-11 02:46:53.874463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.697 qpair failed and we were unable to recover it.
00:41:03.697 [2024-07-11 02:46:53.874605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.697 [2024-07-11 02:46:53.874650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.697 qpair failed and we were unable to recover it. 00:41:03.697 [2024-07-11 02:46:53.874736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.697 [2024-07-11 02:46:53.874762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.697 qpair failed and we were unable to recover it. 00:41:03.697 [2024-07-11 02:46:53.874848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.697 [2024-07-11 02:46:53.874874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.697 qpair failed and we were unable to recover it. 00:41:03.697 [2024-07-11 02:46:53.874964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.697 [2024-07-11 02:46:53.874990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.697 qpair failed and we were unable to recover it. 00:41:03.697 [2024-07-11 02:46:53.875072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.697 [2024-07-11 02:46:53.875098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.697 qpair failed and we were unable to recover it. 
00:41:03.697 [2024-07-11 02:46:53.875185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.697 [2024-07-11 02:46:53.875212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.697 qpair failed and we were unable to recover it. 00:41:03.697 [2024-07-11 02:46:53.875307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.697 [2024-07-11 02:46:53.875335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.697 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.875425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.875455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.875561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.875590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.875678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.875704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 
00:41:03.698 [2024-07-11 02:46:53.875797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.875825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.875909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.875936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.876024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.876050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.876146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.876172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.876254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.876280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 
00:41:03.698 [2024-07-11 02:46:53.876373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.876399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.876484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.876515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.876610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.876635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.876771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.876830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.876925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.876955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 
00:41:03.698 [2024-07-11 02:46:53.877041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.877067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.877150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.877177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.877283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.877310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.877401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.877430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.877521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.877549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 
00:41:03.698 [2024-07-11 02:46:53.877639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.877667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.877760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.877789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.877900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.877927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.878020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.878048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.878155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.878183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 
00:41:03.698 [2024-07-11 02:46:53.878282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.878307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.878408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.878438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.878544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.878570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.878657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.878682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.878765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.878790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 
00:41:03.698 [2024-07-11 02:46:53.878884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.878910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.878997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.879023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.879106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.879132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.879235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.879260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.879344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.879370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 
00:41:03.698 [2024-07-11 02:46:53.879467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.879496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.879595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.879624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.879723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.879751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.879864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.879895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.879999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.880026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 
00:41:03.698 [2024-07-11 02:46:53.880107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.880133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.880220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.698 [2024-07-11 02:46:53.880247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.698 qpair failed and we were unable to recover it. 00:41:03.698 [2024-07-11 02:46:53.880345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.880372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.880486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.880561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.880647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.880674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 
00:41:03.699 [2024-07-11 02:46:53.880758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.880785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.880875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.880901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.880989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.881015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.881126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.881154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.881243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.881269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 
00:41:03.699 [2024-07-11 02:46:53.881358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.881384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.881496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.881527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.881612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.881638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.881724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.881750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.881837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.881863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 
00:41:03.699 [2024-07-11 02:46:53.881952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.881978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.882085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.882110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.882203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.882229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.882438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.882485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.882625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.882670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 
00:41:03.699 [2024-07-11 02:46:53.882774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.882807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.882908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.882939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.883026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.883054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.883145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.883173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.883288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.883347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 
00:41:03.699 [2024-07-11 02:46:53.883432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.883458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.883551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.883577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.883664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.883692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.883797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.883864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.883977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.884035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 
00:41:03.699 [2024-07-11 02:46:53.884141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.884175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.884299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.884328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.884420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.884466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.884581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.884616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.884724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.884752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 
00:41:03.699 [2024-07-11 02:46:53.884846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.884872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.884960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.884987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.885071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.885097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.885179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.885205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 00:41:03.699 [2024-07-11 02:46:53.885291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.699 [2024-07-11 02:46:53.885318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.699 qpair failed and we were unable to recover it. 
00:41:03.699 [2024-07-11 02:46:53.885399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.885425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.885507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.885538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.885618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.885643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.885725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.885751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.885844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.885869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.885963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.885992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.886086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.886113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.886207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.886237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.886326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.886360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.886459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.886486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.886585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.886613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.886708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.886735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.886821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.886847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.886934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.886960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.887046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.887073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.887157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.887183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.887275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.887302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.887391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.887419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.887533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.887561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.887654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.887684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.887807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.887859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.887952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.887980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.888110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.888163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.888256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.888284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.888375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.888402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.888491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.888527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.888639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.888670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.888771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.888799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.888897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.888924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.889027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.889061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.889178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.889208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.889311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.700 [2024-07-11 02:46:53.889336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.700 qpair failed and we were unable to recover it.
00:41:03.700 [2024-07-11 02:46:53.889431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.889457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.889546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.889572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.889661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.889690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.889785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.889815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.889903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.889930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.890021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.890047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.890139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.890166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.890251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.890278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.890371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.890399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.890491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.890529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.890619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.890646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.890739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.890766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.890856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.890884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.890980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.891007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.891096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.891123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.891208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.891233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.891324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.891355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.891445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.891471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.891562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.891587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.891676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.891702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.891787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.891813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.891907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.891939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.892024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.892050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.892144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.892173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.892256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.892282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.892374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.892400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.892496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.892529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.892622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.892650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.892733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.892760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.892851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.892877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.892963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.892991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.893081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.893106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.893189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.893215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.893298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.893323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.893416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.893441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.893529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.893555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.701 qpair failed and we were unable to recover it.
00:41:03.701 [2024-07-11 02:46:53.893645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.701 [2024-07-11 02:46:53.893670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.893760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.893785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.893892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.893918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.894008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.894033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.894126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.894152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.894262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.894296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.894403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.894431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.894524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.894556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.894647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.894674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.894768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.894795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.894884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.894911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.895002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.895029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.895122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.895151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.895233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.895259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.895350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.895378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.895473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.895500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.895609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.895637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.895730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.895757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.895846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.895873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.895967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.895994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.896078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.896104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.896195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.896223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.896312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.896338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.896431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.896457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.896544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.896571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.896660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.896687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.896777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.896803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.896890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.896915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.897010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.897035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.897129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.897158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.897248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.897276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.897368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.897396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.897487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.897519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.897608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.897635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.897755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.897802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.897903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.897934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.898048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.898081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.898189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.898216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.898296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.898322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.898407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.898436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.702 qpair failed and we were unable to recover it.
00:41:03.702 [2024-07-11 02:46:53.898535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.702 [2024-07-11 02:46:53.898562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.703 qpair failed and we were unable to recover it.
00:41:03.703 [2024-07-11 02:46:53.898648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.703 [2024-07-11 02:46:53.898674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.703 qpair failed and we were unable to recover it.
00:41:03.703 [2024-07-11 02:46:53.898759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.703 [2024-07-11 02:46:53.898785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.703 qpair failed and we were unable to recover it.
00:41:03.703 [2024-07-11 02:46:53.898874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.703 [2024-07-11 02:46:53.898900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.703 qpair failed and we were unable to recover it.
00:41:03.703 [2024-07-11 02:46:53.898979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.703 [2024-07-11 02:46:53.899005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.703 qpair failed and we were unable to recover it.
00:41:03.703 [2024-07-11 02:46:53.899096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.703 [2024-07-11 02:46:53.899123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.703 qpair failed and we were unable to recover it.
00:41:03.703 [2024-07-11 02:46:53.899221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.703 [2024-07-11 02:46:53.899254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.703 qpair failed and we were unable to recover it.
00:41:03.703 [2024-07-11 02:46:53.899339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.899367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 00:41:03.703 [2024-07-11 02:46:53.899464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.899493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 00:41:03.703 [2024-07-11 02:46:53.899599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.899627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 00:41:03.703 [2024-07-11 02:46:53.899713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.899740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 00:41:03.703 [2024-07-11 02:46:53.899832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.899860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 
00:41:03.703 [2024-07-11 02:46:53.899943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.899969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 00:41:03.703 [2024-07-11 02:46:53.900065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.900093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 00:41:03.703 [2024-07-11 02:46:53.900191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.900218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 00:41:03.703 [2024-07-11 02:46:53.900307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.900335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 00:41:03.703 [2024-07-11 02:46:53.900428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.900454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 
00:41:03.703 [2024-07-11 02:46:53.900567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.900612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 00:41:03.703 [2024-07-11 02:46:53.900703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.900732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 00:41:03.703 [2024-07-11 02:46:53.900821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.900847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 00:41:03.703 [2024-07-11 02:46:53.900940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.900967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 00:41:03.703 [2024-07-11 02:46:53.901064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.901093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 
00:41:03.703 [2024-07-11 02:46:53.901183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.901209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 00:41:03.703 [2024-07-11 02:46:53.901294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.901320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 00:41:03.703 [2024-07-11 02:46:53.901406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.901431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 00:41:03.703 [2024-07-11 02:46:53.901530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.901557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 00:41:03.703 [2024-07-11 02:46:53.901641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.901666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 
00:41:03.703 [2024-07-11 02:46:53.901759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.901785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 00:41:03.703 [2024-07-11 02:46:53.901871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.901897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 00:41:03.703 [2024-07-11 02:46:53.901982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.902007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 00:41:03.703 [2024-07-11 02:46:53.902091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.902117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 00:41:03.703 [2024-07-11 02:46:53.902206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.902232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 
00:41:03.703 [2024-07-11 02:46:53.902312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.902337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 00:41:03.703 [2024-07-11 02:46:53.902427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.902452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 00:41:03.703 [2024-07-11 02:46:53.902541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.902568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 00:41:03.703 [2024-07-11 02:46:53.902652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.902678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 00:41:03.703 [2024-07-11 02:46:53.902763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.902788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 
00:41:03.703 [2024-07-11 02:46:53.902875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.902901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.703 qpair failed and we were unable to recover it. 00:41:03.703 [2024-07-11 02:46:53.902994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.703 [2024-07-11 02:46:53.903024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.903113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.903140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.903228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.903256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.903347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.903374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 
00:41:03.704 [2024-07-11 02:46:53.903466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.903497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.903605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.903659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.903759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.903789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.903900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.903929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.904040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.904065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 
00:41:03.704 [2024-07-11 02:46:53.904152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.904177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.904276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.904303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.904397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.904423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.904504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.904537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.904620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.904647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 
00:41:03.704 [2024-07-11 02:46:53.904739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.904765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.904848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.904874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.904957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.904982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.905099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.905125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.905214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.905239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 
00:41:03.704 [2024-07-11 02:46:53.905336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.905365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.905466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.905493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.905597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.905624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.905721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.905747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.905843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.905873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 
00:41:03.704 [2024-07-11 02:46:53.905973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.906001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.906091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.906118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.906200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.906228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.906315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.906341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.906426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.906453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 
00:41:03.704 [2024-07-11 02:46:53.906546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.906575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.906684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.906715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.906820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.906847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.906940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.906969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.907057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.907084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 
00:41:03.704 [2024-07-11 02:46:53.907168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.907197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.907286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.907312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.907398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.907426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.907526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.907553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.907647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.907674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 
00:41:03.704 [2024-07-11 02:46:53.907759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.704 [2024-07-11 02:46:53.907786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.704 qpair failed and we were unable to recover it. 00:41:03.704 [2024-07-11 02:46:53.907879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.705 [2024-07-11 02:46:53.907904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.705 qpair failed and we were unable to recover it. 00:41:03.705 [2024-07-11 02:46:53.907990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.705 [2024-07-11 02:46:53.908016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.705 qpair failed and we were unable to recover it. 00:41:03.705 [2024-07-11 02:46:53.908110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.705 [2024-07-11 02:46:53.908140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.705 qpair failed and we were unable to recover it. 00:41:03.705 [2024-07-11 02:46:53.908236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.705 [2024-07-11 02:46:53.908264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.705 qpair failed and we were unable to recover it. 
00:41:03.705 [2024-07-11 02:46:53.908373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.908401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.908495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.908535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.908629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.908656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.908757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.908783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.908886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.908949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.909070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.909102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.909210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.909245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.909348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.909377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.909499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.909556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.909647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.909674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.909761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.909787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.909881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.909908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.910001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.910028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.910121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.910150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.910238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.910264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.910353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.910379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.910471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.910499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.910595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.910622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.910735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.910797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.910910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.910938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.911035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.911063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.911153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.911181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.911263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.911288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.911396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.911433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.911533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.911561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.911650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.911677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.911770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.911796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.911884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.705 [2024-07-11 02:46:53.911912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.705 qpair failed and we were unable to recover it.
00:41:03.705 [2024-07-11 02:46:53.912006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.912033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.912130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.912156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.912248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.912276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.912362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.912390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.912483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.912517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.912611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.912641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.912747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.912775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.912858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.912886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.912992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.913025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.913125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.913152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.913255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.913289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.913409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.913442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.913637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.913679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.913826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.913873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.913974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.914002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.914179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.914206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.914290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.914316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.914431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.914465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.914586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.914620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.914713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.914740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.914825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.914852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.914940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.914965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.915067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.915093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.915177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.915202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.915291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.915316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.915422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.915467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.915567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.915597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.915686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.915713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.915804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.915830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.915942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.915977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.916078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.916106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.916201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.916228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.916324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.916351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.916439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.916464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.916551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.916578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.916695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.916721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.706 [2024-07-11 02:46:53.916814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.706 [2024-07-11 02:46:53.916840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.706 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.916938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.916964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.917054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.917083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.917199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.917239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.917362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.917407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.917502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.917551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.917662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.917708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.917793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.917819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.917904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.917931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.918023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.918055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.918148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.918175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.918263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.918289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.918373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.918399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.918487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.918523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.918616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.918641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.918727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.918753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.918845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.918871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.918965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.918992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.919084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.919110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.919201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.919227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.919326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.919353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.919452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.919478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.919570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.919596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.919692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.919718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.919814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.919839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.919942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.919969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.920090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.920137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.920231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.920261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.920368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.920403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.920519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.920547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.920651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.920678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.920772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.920799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.920888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.920914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.921002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.921028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.921111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.921138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.921229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.921254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.921348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.921379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.707 qpair failed and we were unable to recover it.
00:41:03.707 [2024-07-11 02:46:53.921487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.707 [2024-07-11 02:46:53.921526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.708 qpair failed and we were unable to recover it.
00:41:03.708 [2024-07-11 02:46:53.921633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.708 [2024-07-11 02:46:53.921659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.708 qpair failed and we were unable to recover it.
00:41:03.708 [2024-07-11 02:46:53.921746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.708 [2024-07-11 02:46:53.921771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.708 qpair failed and we were unable to recover it.
00:41:03.708 [2024-07-11 02:46:53.921862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.708 [2024-07-11 02:46:53.921888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.708 qpair failed and we were unable to recover it.
00:41:03.708 [2024-07-11 02:46:53.921993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.708 [2024-07-11 02:46:53.922023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.708 qpair failed and we were unable to recover it.
00:41:03.708 [2024-07-11 02:46:53.922121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.708 [2024-07-11 02:46:53.922155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.708 qpair failed and we were unable to recover it.
00:41:03.708 [2024-07-11 02:46:53.922267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.708 [2024-07-11 02:46:53.922311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.708 qpair failed and we were unable to recover it.
00:41:03.708 [2024-07-11 02:46:53.922401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.708 [2024-07-11 02:46:53.922428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.708 qpair failed and we were unable to recover it.
00:41:03.708 [2024-07-11 02:46:53.922522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.708 [2024-07-11 02:46:53.922550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.708 qpair failed and we were unable to recover it.
00:41:03.708 [2024-07-11 02:46:53.922634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.708 [2024-07-11 02:46:53.922661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.708 qpair failed and we were unable to recover it.
00:41:03.708 [2024-07-11 02:46:53.922742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.708 [2024-07-11 02:46:53.922769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.708 qpair failed and we were unable to recover it.
00:41:03.708 [2024-07-11 02:46:53.922853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.708 [2024-07-11 02:46:53.922881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.708 qpair failed and we were unable to recover it.
00:41:03.708 [2024-07-11 02:46:53.922973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.708 [2024-07-11 02:46:53.923000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.708 qpair failed and we were unable to recover it.
00:41:03.708 [2024-07-11 02:46:53.923099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.708 [2024-07-11 02:46:53.923129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.708 qpair failed and we were unable to recover it.
00:41:03.708 [2024-07-11 02:46:53.923225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.923252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 00:41:03.708 [2024-07-11 02:46:53.923339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.923368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 00:41:03.708 [2024-07-11 02:46:53.923455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.923481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 00:41:03.708 [2024-07-11 02:46:53.923584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.923612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 00:41:03.708 [2024-07-11 02:46:53.923706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.923733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 
00:41:03.708 [2024-07-11 02:46:53.923830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.923857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 00:41:03.708 [2024-07-11 02:46:53.923945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.923973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 00:41:03.708 [2024-07-11 02:46:53.924082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.924113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 00:41:03.708 [2024-07-11 02:46:53.924228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.924256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 00:41:03.708 [2024-07-11 02:46:53.924353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.924382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 
00:41:03.708 [2024-07-11 02:46:53.924472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.924498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 00:41:03.708 [2024-07-11 02:46:53.924599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.924626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 00:41:03.708 [2024-07-11 02:46:53.924721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.924752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 00:41:03.708 [2024-07-11 02:46:53.924865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.924891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 00:41:03.708 [2024-07-11 02:46:53.924994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.925020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 
00:41:03.708 [2024-07-11 02:46:53.925107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.925133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 00:41:03.708 [2024-07-11 02:46:53.925221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.925247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 00:41:03.708 [2024-07-11 02:46:53.925347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.925377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 00:41:03.708 [2024-07-11 02:46:53.925486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.925522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 00:41:03.708 [2024-07-11 02:46:53.925630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.925656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 
00:41:03.708 [2024-07-11 02:46:53.925748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.925776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 00:41:03.708 [2024-07-11 02:46:53.925866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.925893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 00:41:03.708 [2024-07-11 02:46:53.925979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.926006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 00:41:03.708 [2024-07-11 02:46:53.926092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.926119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 00:41:03.708 [2024-07-11 02:46:53.926206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.926235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 
00:41:03.708 [2024-07-11 02:46:53.926325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.926353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 00:41:03.708 [2024-07-11 02:46:53.926449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.708 [2024-07-11 02:46:53.926477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.708 qpair failed and we were unable to recover it. 00:41:03.708 [2024-07-11 02:46:53.926585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.926615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 00:41:03.709 [2024-07-11 02:46:53.926703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.926731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 00:41:03.709 [2024-07-11 02:46:53.926817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.926843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 
00:41:03.709 [2024-07-11 02:46:53.926924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.926950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 00:41:03.709 [2024-07-11 02:46:53.927040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.927067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 00:41:03.709 [2024-07-11 02:46:53.927162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.927191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 00:41:03.709 [2024-07-11 02:46:53.927279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.927304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 00:41:03.709 [2024-07-11 02:46:53.927392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.927418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 
00:41:03.709 [2024-07-11 02:46:53.927515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.927542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 00:41:03.709 [2024-07-11 02:46:53.927631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.927657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 00:41:03.709 [2024-07-11 02:46:53.927759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.927788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 00:41:03.709 [2024-07-11 02:46:53.927889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.927914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 00:41:03.709 [2024-07-11 02:46:53.928016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.928042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 
00:41:03.709 [2024-07-11 02:46:53.928124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.928150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 00:41:03.709 [2024-07-11 02:46:53.928236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.928263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 00:41:03.709 [2024-07-11 02:46:53.928360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.928386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 00:41:03.709 [2024-07-11 02:46:53.928469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.928494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 00:41:03.709 [2024-07-11 02:46:53.928595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.928621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 
00:41:03.709 [2024-07-11 02:46:53.928713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.928738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 00:41:03.709 [2024-07-11 02:46:53.928819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.928844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 00:41:03.709 [2024-07-11 02:46:53.928932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.928958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 00:41:03.709 [2024-07-11 02:46:53.929041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.929067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 00:41:03.709 [2024-07-11 02:46:53.929169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.929196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 
00:41:03.709 [2024-07-11 02:46:53.929296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.929322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 00:41:03.709 [2024-07-11 02:46:53.929411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.929436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 00:41:03.709 [2024-07-11 02:46:53.929528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.929555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 00:41:03.709 [2024-07-11 02:46:53.929647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.929672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 00:41:03.709 [2024-07-11 02:46:53.929761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.929791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 
00:41:03.709 [2024-07-11 02:46:53.929877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.929903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 00:41:03.709 [2024-07-11 02:46:53.929993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.930023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 00:41:03.709 [2024-07-11 02:46:53.930109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.709 [2024-07-11 02:46:53.930135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.709 qpair failed and we were unable to recover it. 00:41:03.710 [2024-07-11 02:46:53.930226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.710 [2024-07-11 02:46:53.930251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.710 qpair failed and we were unable to recover it. 00:41:03.710 [2024-07-11 02:46:53.930349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.710 [2024-07-11 02:46:53.930376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.710 qpair failed and we were unable to recover it. 
00:41:03.710 [2024-07-11 02:46:53.930464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.710 [2024-07-11 02:46:53.930491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.710 qpair failed and we were unable to recover it. 00:41:03.710 [2024-07-11 02:46:53.930606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.710 [2024-07-11 02:46:53.930640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.710 qpair failed and we were unable to recover it. 00:41:03.710 [2024-07-11 02:46:53.930751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.710 [2024-07-11 02:46:53.930779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.710 qpair failed and we were unable to recover it. 00:41:03.710 [2024-07-11 02:46:53.930875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.710 [2024-07-11 02:46:53.930903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.710 qpair failed and we were unable to recover it. 00:41:03.710 [2024-07-11 02:46:53.930996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.710 [2024-07-11 02:46:53.931022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.710 qpair failed and we were unable to recover it. 
00:41:03.710 [2024-07-11 02:46:53.931110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.710 [2024-07-11 02:46:53.931137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.710 qpair failed and we were unable to recover it. 00:41:03.710 [2024-07-11 02:46:53.931233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.710 [2024-07-11 02:46:53.931259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.710 qpair failed and we were unable to recover it. 00:41:03.710 [2024-07-11 02:46:53.931352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.710 [2024-07-11 02:46:53.931379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.710 qpair failed and we were unable to recover it. 00:41:03.710 [2024-07-11 02:46:53.931464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.710 [2024-07-11 02:46:53.931490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.710 qpair failed and we were unable to recover it. 00:41:03.710 [2024-07-11 02:46:53.931596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.710 [2024-07-11 02:46:53.931627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.710 qpair failed and we were unable to recover it. 
00:41:03.710 [2024-07-11 02:46:53.931720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.710 [2024-07-11 02:46:53.931746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.710 qpair failed and we were unable to recover it. 00:41:03.710 [2024-07-11 02:46:53.931831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.710 [2024-07-11 02:46:53.931857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.710 qpair failed and we were unable to recover it. 00:41:03.710 [2024-07-11 02:46:53.931960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.710 [2024-07-11 02:46:53.931991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.710 qpair failed and we were unable to recover it. 00:41:03.710 [2024-07-11 02:46:53.932111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.710 [2024-07-11 02:46:53.932153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.710 qpair failed and we were unable to recover it. 00:41:03.710 [2024-07-11 02:46:53.932258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.710 [2024-07-11 02:46:53.932294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.710 qpair failed and we were unable to recover it. 
00:41:03.710 [2024-07-11 02:46:53.932391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.710 [2024-07-11 02:46:53.932417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.710 qpair failed and we were unable to recover it.
00:41:03.710 [2024-07-11 02:46:53.932524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.710 [2024-07-11 02:46:53.932553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.710 qpair failed and we were unable to recover it.
00:41:03.710 [2024-07-11 02:46:53.932650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.710 [2024-07-11 02:46:53.932677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.710 qpair failed and we were unable to recover it.
00:41:03.710 [2024-07-11 02:46:53.932767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.710 [2024-07-11 02:46:53.932793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.710 qpair failed and we were unable to recover it.
00:41:03.710 [2024-07-11 02:46:53.932878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.710 [2024-07-11 02:46:53.932911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.710 qpair failed and we were unable to recover it.
00:41:03.710 [2024-07-11 02:46:53.933013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.710 [2024-07-11 02:46:53.933040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.710 qpair failed and we were unable to recover it.
00:41:03.710 [2024-07-11 02:46:53.933136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.710 [2024-07-11 02:46:53.933163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.710 qpair failed and we were unable to recover it.
00:41:03.710 [2024-07-11 02:46:53.933254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.710 [2024-07-11 02:46:53.933280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.710 qpair failed and we were unable to recover it.
00:41:03.710 [2024-07-11 02:46:53.933372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.710 [2024-07-11 02:46:53.933399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.710 qpair failed and we were unable to recover it.
00:41:03.710 [2024-07-11 02:46:53.933488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.710 [2024-07-11 02:46:53.933524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.710 qpair failed and we were unable to recover it.
00:41:03.710 [2024-07-11 02:46:53.933629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.710 [2024-07-11 02:46:53.933658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.710 qpair failed and we were unable to recover it.
00:41:03.710 [2024-07-11 02:46:53.933764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.710 [2024-07-11 02:46:53.933792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.710 qpair failed and we were unable to recover it.
00:41:03.710 [2024-07-11 02:46:53.933876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.710 [2024-07-11 02:46:53.933903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.710 qpair failed and we were unable to recover it.
00:41:03.710 [2024-07-11 02:46:53.933989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.710 [2024-07-11 02:46:53.934015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.710 qpair failed and we were unable to recover it.
00:41:03.710 [2024-07-11 02:46:53.934102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.710 [2024-07-11 02:46:53.934128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.710 qpair failed and we were unable to recover it.
00:41:03.710 [2024-07-11 02:46:53.934208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.710 [2024-07-11 02:46:53.934234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.710 qpair failed and we were unable to recover it.
00:41:03.710 [2024-07-11 02:46:53.934324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.710 [2024-07-11 02:46:53.934353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.710 qpair failed and we were unable to recover it.
00:41:03.710 [2024-07-11 02:46:53.934440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.710 [2024-07-11 02:46:53.934465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.710 qpair failed and we were unable to recover it.
00:41:03.710 [2024-07-11 02:46:53.934569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.710 [2024-07-11 02:46:53.934595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.710 qpair failed and we were unable to recover it.
00:41:03.710 [2024-07-11 02:46:53.934683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.710 [2024-07-11 02:46:53.934708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.710 qpair failed and we were unable to recover it.
00:41:03.710 [2024-07-11 02:46:53.934793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.710 [2024-07-11 02:46:53.934820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.710 qpair failed and we were unable to recover it.
00:41:03.710 [2024-07-11 02:46:53.934929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.710 [2024-07-11 02:46:53.934968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.710 qpair failed and we were unable to recover it.
00:41:03.710 [2024-07-11 02:46:53.935064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.710 [2024-07-11 02:46:53.935090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.710 qpair failed and we were unable to recover it.
00:41:03.710 [2024-07-11 02:46:53.935180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.710 [2024-07-11 02:46:53.935210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.935313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.935341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.935444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.935472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.935571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.935598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.935685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.935712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.935800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.935828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.935927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.935954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.936035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.936061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.936156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.936185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.936284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.936311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.936400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.936428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.936528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.936555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.936647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.936673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.936757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.936783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.936868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.936893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.936996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.937021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.937104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.937129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.937212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.937237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.937325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.937351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.937442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.937470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.937592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.937618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.937722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.937762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.937861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.937889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.937974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.938000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.938089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.938116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.938198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.938224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.938309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.938336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.938428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.938456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.938554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.938581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.938667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.938693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.938784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.938810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.938909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.938935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.939028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.939053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.939140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.939165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.939253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.939278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.939369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.939399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.939515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.939543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.939637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.939664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.711 qpair failed and we were unable to recover it.
00:41:03.711 [2024-07-11 02:46:53.939756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.711 [2024-07-11 02:46:53.939785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.939871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.939899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.939997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.940025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.940120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.940148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.940234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.940260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.940356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.940383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.940478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.940503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.940599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.940625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.940711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.940737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.940823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.940848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.940932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.940957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.941052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.941077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.941161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.941186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.941279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.941304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.941397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.941423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.941519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.941546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.941629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.941655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.941754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.941780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.941889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.941915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.941995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.942020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.942111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.942138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.942235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.942261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.942354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.942380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.942465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.942490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.942593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.942628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.942720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.942750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.942843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.942871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.942962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.942988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.943078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.943105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.943188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.943214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.943310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.943336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.943426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.943452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.943535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.943562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.943652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.943679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.943773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.943801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.943895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.943922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.944021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.944050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.944146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.944175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.944270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.944297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.944380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.944407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.944494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.944526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.944608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.944635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.944732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.712 [2024-07-11 02:46:53.944761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.712 qpair failed and we were unable to recover it.
00:41:03.712 [2024-07-11 02:46:53.944859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.713 [2024-07-11 02:46:53.944887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.713 qpair failed and we were unable to recover it.
00:41:03.713 [2024-07-11 02:46:53.944984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.713 [2024-07-11 02:46:53.945010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.713 qpair failed and we were unable to recover it.
00:41:03.713 [2024-07-11 02:46:53.945103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.713 [2024-07-11 02:46:53.945130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.713 qpair failed and we were unable to recover it.
00:41:03.713 [2024-07-11 02:46:53.945215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.713 [2024-07-11 02:46:53.945242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.713 qpair failed and we were unable to recover it.
00:41:03.713 [2024-07-11 02:46:53.945344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.713 [2024-07-11 02:46:53.945371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.713 qpair failed and we were unable to recover it.
00:41:03.713 [2024-07-11 02:46:53.945463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.713 [2024-07-11 02:46:53.945491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.713 qpair failed and we were unable to recover it.
00:41:03.713 [2024-07-11 02:46:53.945592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.713 [2024-07-11 02:46:53.945618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.713 qpair failed and we were unable to recover it.
00:41:03.713 [2024-07-11 02:46:53.945706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.713 [2024-07-11 02:46:53.945733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.713 qpair failed and we were unable to recover it.
00:41:03.713 [2024-07-11 02:46:53.945834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.713 [2024-07-11 02:46:53.945862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.713 qpair failed and we were unable to recover it.
00:41:03.713 [2024-07-11 02:46:53.945958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.713 [2024-07-11 02:46:53.945986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.713 qpair failed and we were unable to recover it.
00:41:03.713 [2024-07-11 02:46:53.946073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.713 [2024-07-11 02:46:53.946101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.713 qpair failed and we were unable to recover it.
00:41:03.713 [2024-07-11 02:46:53.946190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.946217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.713 [2024-07-11 02:46:53.946305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.946331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.713 [2024-07-11 02:46:53.946423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.946450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.713 [2024-07-11 02:46:53.946544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.946570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.713 [2024-07-11 02:46:53.946658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.946686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 
00:41:03.713 [2024-07-11 02:46:53.946772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.946799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.713 [2024-07-11 02:46:53.946890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.946916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.713 [2024-07-11 02:46:53.947008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.947035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.713 [2024-07-11 02:46:53.947121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.947148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.713 [2024-07-11 02:46:53.947237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.947265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 
00:41:03.713 [2024-07-11 02:46:53.947348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.947382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.713 [2024-07-11 02:46:53.947471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.947497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.713 [2024-07-11 02:46:53.947594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.947623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.713 [2024-07-11 02:46:53.947717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.947744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.713 [2024-07-11 02:46:53.947828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.947855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 
00:41:03.713 [2024-07-11 02:46:53.947939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.947965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.713 [2024-07-11 02:46:53.948046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.948073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.713 [2024-07-11 02:46:53.948161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.948187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.713 [2024-07-11 02:46:53.948281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.948309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.713 [2024-07-11 02:46:53.948399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.948424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 
00:41:03.713 [2024-07-11 02:46:53.948515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.948542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.713 [2024-07-11 02:46:53.948630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.948656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.713 [2024-07-11 02:46:53.948751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.948777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.713 [2024-07-11 02:46:53.948863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.948888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.713 [2024-07-11 02:46:53.948978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.949003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 
00:41:03.713 [2024-07-11 02:46:53.949089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.949116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.713 [2024-07-11 02:46:53.949199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.949225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.713 [2024-07-11 02:46:53.949314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.949339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.713 [2024-07-11 02:46:53.949425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.949452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.713 [2024-07-11 02:46:53.949555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.949581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 
00:41:03.713 [2024-07-11 02:46:53.949670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.949696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.713 [2024-07-11 02:46:53.949786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.949811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.713 [2024-07-11 02:46:53.949899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.713 [2024-07-11 02:46:53.949926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.713 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.950042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.950067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.950160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.950190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 
00:41:03.714 [2024-07-11 02:46:53.950285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.950313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.950407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.950435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.950524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.950557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.950640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.950667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.950760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.950786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 
00:41:03.714 [2024-07-11 02:46:53.950880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.950908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.950996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.951022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.951113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.951142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.951229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.951256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.951355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.951382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 
00:41:03.714 [2024-07-11 02:46:53.951464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.951491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.951618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.951646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.951823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.951850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.951943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.951970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.952062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.952090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 
00:41:03.714 [2024-07-11 02:46:53.952183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.952210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.952314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.952346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.952436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.952462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.952553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.952580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.952662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.952688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 
00:41:03.714 [2024-07-11 02:46:53.952794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.952820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.952909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.952935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.953025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.953053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.953161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.953190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.953278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.953305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 
00:41:03.714 [2024-07-11 02:46:53.953399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.953426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.953529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.953557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.953662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.953691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.953793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.953822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.953921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.953948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 
00:41:03.714 [2024-07-11 02:46:53.954036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.954062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.954151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.954177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.954264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.954291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.954378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.954404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.954487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.954517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 
00:41:03.714 [2024-07-11 02:46:53.954641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.714 [2024-07-11 02:46:53.954667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.714 qpair failed and we were unable to recover it. 00:41:03.714 [2024-07-11 02:46:53.954758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.954783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.954863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.954888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.954978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.955007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.955116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.955143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 
00:41:03.715 [2024-07-11 02:46:53.955234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.955261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.955351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.955377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.955468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.955494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.955594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.955621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.955702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.955728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 
00:41:03.715 [2024-07-11 02:46:53.955824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.955851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.955942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.955970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.956058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.956085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.956179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.956206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.956299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.956329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 
00:41:03.715 [2024-07-11 02:46:53.956413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.956440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.956534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.956562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.956683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.956716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.956804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.956832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.956918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.956946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 
00:41:03.715 [2024-07-11 02:46:53.957039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.957066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.957161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.957189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.957281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.957308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.957400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.957428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.957517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.957544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 
00:41:03.715 [2024-07-11 02:46:53.957646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.957672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.957754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.957782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.957863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.957889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.957987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.958013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.958104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.958131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 
00:41:03.715 [2024-07-11 02:46:53.958229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.958256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.958352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.958379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.958472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.958500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.958600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.958626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.958709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.958741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 
00:41:03.715 [2024-07-11 02:46:53.958832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.958857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.958940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.958966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.959057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.959083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.959177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.959204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.959304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.959330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 
00:41:03.715 [2024-07-11 02:46:53.959424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.959450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.959538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.959565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.959655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.959681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.959775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.959801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 00:41:03.715 [2024-07-11 02:46:53.959892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.715 [2024-07-11 02:46:53.959919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.715 qpair failed and we were unable to recover it. 
00:41:03.716 [2024-07-11 02:46:53.960001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.960026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.960108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.960134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.960232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.960260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.960348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.960374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.960456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.960483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 
00:41:03.716 [2024-07-11 02:46:53.960581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.960607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.960698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.960725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.960807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.960833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.960917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.960943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.961033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.961062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 
00:41:03.716 [2024-07-11 02:46:53.961163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.961190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.961274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.961303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.961385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.961411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.961499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.961534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.961622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.961649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 
00:41:03.716 [2024-07-11 02:46:53.961745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.961771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.961856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.961887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.961983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.962009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.962108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.962134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.962214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.962240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 
00:41:03.716 [2024-07-11 02:46:53.962335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.962361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.962460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.962489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.962594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.962620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.962707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.962733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.962818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.962843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 
00:41:03.716 [2024-07-11 02:46:53.962958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.962984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.963135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.963174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.963346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.963373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.963466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.963492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.963594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.963621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 
00:41:03.716 [2024-07-11 02:46:53.963769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.963810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.963892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.963918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.964010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.964039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.964146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.964172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.964253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.964279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 
00:41:03.716 [2024-07-11 02:46:53.964365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.964392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.964488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.964521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.964615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.964643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.964741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.964767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.964857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.964885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 
00:41:03.716 [2024-07-11 02:46:53.964970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.964997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.716 qpair failed and we were unable to recover it. 00:41:03.716 [2024-07-11 02:46:53.965085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.716 [2024-07-11 02:46:53.965114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.717 qpair failed and we were unable to recover it. 00:41:03.717 [2024-07-11 02:46:53.965200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.717 [2024-07-11 02:46:53.965228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.717 qpair failed and we were unable to recover it. 00:41:03.717 [2024-07-11 02:46:53.965367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.717 [2024-07-11 02:46:53.965407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.717 qpair failed and we were unable to recover it. 00:41:03.717 [2024-07-11 02:46:53.965506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.717 [2024-07-11 02:46:53.965541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.717 qpair failed and we were unable to recover it. 
00:41:03.717 [2024-07-11 02:46:53.965641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.717 [2024-07-11 02:46:53.965669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.717 qpair failed and we were unable to recover it. 00:41:03.717 [2024-07-11 02:46:53.965792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.717 [2024-07-11 02:46:53.965818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.717 qpair failed and we were unable to recover it. 00:41:03.717 [2024-07-11 02:46:53.965900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.717 [2024-07-11 02:46:53.965927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.717 qpair failed and we were unable to recover it. 00:41:03.717 [2024-07-11 02:46:53.966023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.717 [2024-07-11 02:46:53.966051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.717 qpair failed and we were unable to recover it. 00:41:03.717 [2024-07-11 02:46:53.966147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.717 [2024-07-11 02:46:53.966176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.717 qpair failed and we were unable to recover it. 
00:41:03.717 [2024-07-11 02:46:53.966263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.717 [2024-07-11 02:46:53.966288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.717 qpair failed and we were unable to recover it. 00:41:03.717 [2024-07-11 02:46:53.966376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.717 [2024-07-11 02:46:53.966402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.717 qpair failed and we were unable to recover it. 00:41:03.717 [2024-07-11 02:46:53.966487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.717 [2024-07-11 02:46:53.966522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.717 qpair failed and we were unable to recover it. 00:41:03.717 [2024-07-11 02:46:53.966610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.717 [2024-07-11 02:46:53.966638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.717 qpair failed and we were unable to recover it. 00:41:03.717 [2024-07-11 02:46:53.966732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.717 [2024-07-11 02:46:53.966758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.717 qpair failed and we were unable to recover it. 
00:41:03.717 [2024-07-11 02:46:53.966853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.717 [2024-07-11 02:46:53.966881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.717 qpair failed and we were unable to recover it. 00:41:03.717 [2024-07-11 02:46:53.966968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.717 [2024-07-11 02:46:53.966999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.717 qpair failed and we were unable to recover it. 00:41:03.717 [2024-07-11 02:46:53.967081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.717 [2024-07-11 02:46:53.967107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.717 qpair failed and we were unable to recover it. 00:41:03.717 [2024-07-11 02:46:53.967203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.717 [2024-07-11 02:46:53.967229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.717 qpair failed and we were unable to recover it. 00:41:03.717 [2024-07-11 02:46:53.967321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.717 [2024-07-11 02:46:53.967349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.717 qpair failed and we were unable to recover it. 
00:41:03.717 [2024-07-11 02:46:53.967437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.717 [2024-07-11 02:46:53.967463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.717 qpair failed and we were unable to recover it. 00:41:03.717 [2024-07-11 02:46:53.967558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.717 [2024-07-11 02:46:53.967585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.717 qpair failed and we were unable to recover it. 00:41:03.717 [2024-07-11 02:46:53.967678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.717 [2024-07-11 02:46:53.967703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.717 qpair failed and we were unable to recover it. 00:41:03.717 [2024-07-11 02:46:53.967793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.717 [2024-07-11 02:46:53.967820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.717 qpair failed and we were unable to recover it. 00:41:03.717 [2024-07-11 02:46:53.967906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.717 [2024-07-11 02:46:53.967933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.717 qpair failed and we were unable to recover it. 
00:41:03.717 [2024-07-11 02:46:53.968018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.717 [2024-07-11 02:46:53.968043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.717 qpair failed and we were unable to recover it.
00:41:03.717 [2024-07-11 02:46:53.968164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.717 [2024-07-11 02:46:53.968190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.717 qpair failed and we were unable to recover it.
00:41:03.717 [2024-07-11 02:46:53.968287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.717 [2024-07-11 02:46:53.968316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.717 qpair failed and we were unable to recover it.
00:41:03.717 [2024-07-11 02:46:53.968408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.717 [2024-07-11 02:46:53.968435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.717 qpair failed and we were unable to recover it.
00:41:03.717 [2024-07-11 02:46:53.968535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.717 [2024-07-11 02:46:53.968565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.717 qpair failed and we were unable to recover it.
00:41:03.717 [2024-07-11 02:46:53.968663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.717 [2024-07-11 02:46:53.968691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.717 qpair failed and we were unable to recover it.
00:41:03.717 [2024-07-11 02:46:53.968777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.717 [2024-07-11 02:46:53.968804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.717 qpair failed and we were unable to recover it.
00:41:03.717 [2024-07-11 02:46:53.968887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.717 [2024-07-11 02:46:53.968918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.717 qpair failed and we were unable to recover it.
00:41:03.717 [2024-07-11 02:46:53.969012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.717 [2024-07-11 02:46:53.969040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.717 qpair failed and we were unable to recover it.
00:41:03.717 [2024-07-11 02:46:53.969161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.717 [2024-07-11 02:46:53.969187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.717 qpair failed and we were unable to recover it.
00:41:03.717 [2024-07-11 02:46:53.969272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.717 [2024-07-11 02:46:53.969299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.717 qpair failed and we were unable to recover it.
00:41:03.717 [2024-07-11 02:46:53.969387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.717 [2024-07-11 02:46:53.969413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.717 qpair failed and we were unable to recover it.
00:41:03.717 [2024-07-11 02:46:53.969506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.717 [2024-07-11 02:46:53.969539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.717 qpair failed and we were unable to recover it.
00:41:03.717 [2024-07-11 02:46:53.969632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.717 [2024-07-11 02:46:53.969661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.717 qpair failed and we were unable to recover it.
00:41:03.717 [2024-07-11 02:46:53.969778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.717 [2024-07-11 02:46:53.969841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.717 qpair failed and we were unable to recover it.
00:41:03.717 [2024-07-11 02:46:53.969922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.717 [2024-07-11 02:46:53.969948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.717 qpair failed and we were unable to recover it.
00:41:03.717 [2024-07-11 02:46:53.970029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.717 [2024-07-11 02:46:53.970055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.717 qpair failed and we were unable to recover it.
00:41:03.717 [2024-07-11 02:46:53.970174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.717 [2024-07-11 02:46:53.970201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.717 qpair failed and we were unable to recover it.
00:41:03.717 [2024-07-11 02:46:53.970292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.717 [2024-07-11 02:46:53.970322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.717 qpair failed and we were unable to recover it.
00:41:03.717 [2024-07-11 02:46:53.970418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.717 [2024-07-11 02:46:53.970444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.717 qpair failed and we were unable to recover it.
00:41:03.717 [2024-07-11 02:46:53.970536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.717 [2024-07-11 02:46:53.970565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.717 qpair failed and we were unable to recover it.
00:41:03.717 [2024-07-11 02:46:53.970658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.717 [2024-07-11 02:46:53.970685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.717 qpair failed and we were unable to recover it.
00:41:03.718 [2024-07-11 02:46:53.970778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.718 [2024-07-11 02:46:53.970805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.720 qpair failed and we were unable to recover it.
00:41:03.720 [2024-07-11 02:46:53.970903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.720 [2024-07-11 02:46:53.970931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.720 qpair failed and we were unable to recover it.
00:41:03.720 [2024-07-11 02:46:53.971021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.720 [2024-07-11 02:46:53.971048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.720 qpair failed and we were unable to recover it.
00:41:03.720 [2024-07-11 02:46:53.971133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.720 [2024-07-11 02:46:53.971160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.720 qpair failed and we were unable to recover it.
00:41:03.720 [2024-07-11 02:46:53.971246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.720 [2024-07-11 02:46:53.971274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.720 qpair failed and we were unable to recover it.
00:41:03.720 [2024-07-11 02:46:53.971356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.720 [2024-07-11 02:46:53.971381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.720 qpair failed and we were unable to recover it.
00:41:03.720 [2024-07-11 02:46:53.971472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.720 [2024-07-11 02:46:53.971498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.720 qpair failed and we were unable to recover it.
00:41:03.720 [2024-07-11 02:46:53.971589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.720 [2024-07-11 02:46:53.971614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.720 qpair failed and we were unable to recover it.
00:41:03.720 [2024-07-11 02:46:53.971703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.720 [2024-07-11 02:46:53.971729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.720 qpair failed and we were unable to recover it.
00:41:03.720 [2024-07-11 02:46:53.971822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.720 [2024-07-11 02:46:53.971851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.720 qpair failed and we were unable to recover it.
00:41:03.720 [2024-07-11 02:46:53.971955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.720 [2024-07-11 02:46:53.971982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.720 qpair failed and we were unable to recover it.
00:41:03.720 [2024-07-11 02:46:53.972099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.720 [2024-07-11 02:46:53.972126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.720 qpair failed and we were unable to recover it.
00:41:03.720 [2024-07-11 02:46:53.972211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.720 [2024-07-11 02:46:53.972237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.720 qpair failed and we were unable to recover it.
00:41:03.720 [2024-07-11 02:46:53.972323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.720 [2024-07-11 02:46:53.972350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.720 qpair failed and we were unable to recover it.
00:41:03.720 [2024-07-11 02:46:53.972434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.720 [2024-07-11 02:46:53.972461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.720 qpair failed and we were unable to recover it.
00:41:03.720 [2024-07-11 02:46:53.972555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.720 [2024-07-11 02:46:53.972582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.720 qpair failed and we were unable to recover it.
00:41:03.720 [2024-07-11 02:46:53.972666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.720 [2024-07-11 02:46:53.972692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.720 qpair failed and we were unable to recover it.
00:41:03.720 [2024-07-11 02:46:53.972775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.720 [2024-07-11 02:46:53.972801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.720 qpair failed and we were unable to recover it.
00:41:03.720 [2024-07-11 02:46:53.972891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.720 [2024-07-11 02:46:53.972916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.720 qpair failed and we were unable to recover it.
00:41:03.720 [2024-07-11 02:46:53.973013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.720 [2024-07-11 02:46:53.973038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.720 qpair failed and we were unable to recover it.
00:41:03.720 [2024-07-11 02:46:53.973156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.720 [2024-07-11 02:46:53.973181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.720 qpair failed and we were unable to recover it.
00:41:03.720 [2024-07-11 02:46:53.973328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.720 [2024-07-11 02:46:53.973356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.720 qpair failed and we were unable to recover it.
00:41:03.720 [2024-07-11 02:46:53.973475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.720 [2024-07-11 02:46:53.973502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.720 qpair failed and we were unable to recover it.
00:41:03.720 [2024-07-11 02:46:53.973605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.973644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.973835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.973863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.973972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.974036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.974144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.974211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.974303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.974331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.974425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.974453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.974557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.974585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.974671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.974699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.974789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.974815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.974909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.974936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.975024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.975051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.975138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.975166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.975270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.975299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.975435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.975499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.975599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.975625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.975715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.975743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.975916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.975943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.976110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.976136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.976223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.976250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.976333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.976359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.976450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.976477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.976571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.976600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.976723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.976752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.976862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.976939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.977074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.977102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.977233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.977288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.977380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.977405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.977504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.977536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.977626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.977651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.977732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.977758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.977848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.977874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.977972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.978000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.978101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.978127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.978213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.978240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.978326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.978352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.978461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.978487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.978625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.978675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.978791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.978833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.978936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.978978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.979075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.979104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.979209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.979240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.979337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.979367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.979472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.979502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.979616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.979643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.979730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.979756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.979848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.979876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.979970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.979998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.980090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.980119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.980240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.980269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.980407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.721 [2024-07-11 02:46:53.980448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.721 qpair failed and we were unable to recover it.
00:41:03.721 [2024-07-11 02:46:53.980536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.721 [2024-07-11 02:46:53.980565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.721 qpair failed and we were unable to recover it. 00:41:03.721 [2024-07-11 02:46:53.980676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.721 [2024-07-11 02:46:53.980737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.721 qpair failed and we were unable to recover it. 00:41:03.721 [2024-07-11 02:46:53.980845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.721 [2024-07-11 02:46:53.980887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.721 qpair failed and we were unable to recover it. 00:41:03.721 [2024-07-11 02:46:53.980994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.721 [2024-07-11 02:46:53.981038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.721 qpair failed and we were unable to recover it. 00:41:03.721 [2024-07-11 02:46:53.981159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.721 [2024-07-11 02:46:53.981214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.721 qpair failed and we were unable to recover it. 
00:41:03.721 [2024-07-11 02:46:53.981358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.721 [2024-07-11 02:46:53.981386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.721 qpair failed and we were unable to recover it. 00:41:03.721 [2024-07-11 02:46:53.981474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.721 [2024-07-11 02:46:53.981502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.721 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.981603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.981634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.981764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.981805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.981978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.982006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 
00:41:03.722 [2024-07-11 02:46:53.982134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.982185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.982299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.982326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.982447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.982476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.982583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.982615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.982804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.982833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 
00:41:03.722 [2024-07-11 02:46:53.982937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.982978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.983092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.983151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.983255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.983299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.983386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.983413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.983524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.983567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 
00:41:03.722 [2024-07-11 02:46:53.983671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.983701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.983824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.983867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.983979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.984035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.984127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.984155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.984240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.984266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 
00:41:03.722 [2024-07-11 02:46:53.984365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.984395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.984494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.984528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.984674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.984742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.984852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.984896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.985005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.985070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 
00:41:03.722 [2024-07-11 02:46:53.985180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.985242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.985414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.985441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.985536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.985564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.985679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.985721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.985827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.985872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 
00:41:03.722 [2024-07-11 02:46:53.985980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.986022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.986112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.986139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.986228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.986256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.986337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.986364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.986478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.986550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 
00:41:03.722 [2024-07-11 02:46:53.986647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.986674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.986782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.986823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.986930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.986972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.987145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.987171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.987297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.987322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 
00:41:03.722 [2024-07-11 02:46:53.987431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.987474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.987572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.987601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.987730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.987757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.987882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.987934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.988023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.988050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 
00:41:03.722 [2024-07-11 02:46:53.988174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.988230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.988337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.988381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.988471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.988500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.988688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.988739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.988822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.988849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 
00:41:03.722 [2024-07-11 02:46:53.988983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.989031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.989134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.989165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.989263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.989294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.989381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.989407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.989492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.989524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 
00:41:03.722 [2024-07-11 02:46:53.989617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.989645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.989734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.989761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.989880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.989906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.990022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.990064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.990146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.990172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 
00:41:03.722 [2024-07-11 02:46:53.990266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.990306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.990403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.990431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.990524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.990552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.990672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.990699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.990782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.990808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 
00:41:03.722 [2024-07-11 02:46:53.990892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.990919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.991106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.991132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.991225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.991253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.722 qpair failed and we were unable to recover it. 00:41:03.722 [2024-07-11 02:46:53.991337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.722 [2024-07-11 02:46:53.991363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.991451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.991477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 
00:41:03.723 [2024-07-11 02:46:53.991575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.991602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.991688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.991715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.991800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.991828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.991930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.991970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.992065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.992093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 
00:41:03.723 [2024-07-11 02:46:53.992187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.992215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.992311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.992338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.992461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.992488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.992577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.992604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.992709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.992758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 
00:41:03.723 [2024-07-11 02:46:53.992867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.992908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.993020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.993063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.993169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.993213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.993293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.993319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.993484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.993520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 
00:41:03.723 [2024-07-11 02:46:53.993623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.993654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.993787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.993815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.993905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.993932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.994018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.994045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.994129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.994155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 
00:41:03.723 [2024-07-11 02:46:53.994236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.994262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.994348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.994377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.994464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.994490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.994592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.994620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.994709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.994735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 
00:41:03.723 [2024-07-11 02:46:53.994844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.994903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.995014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.995071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.995184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.995227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.995339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.995400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.995490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.995524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 
00:41:03.723 [2024-07-11 02:46:53.995615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.995643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.995733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.995760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.995851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.995877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.995965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.995992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.996086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.996114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 
00:41:03.723 [2024-07-11 02:46:53.996206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.996235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.996341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.996369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.996458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.996484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.996583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.996610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.996698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.996726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 
00:41:03.723 [2024-07-11 02:46:53.996850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.996877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.996995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.997023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.997116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.997144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.997263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.997290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.997391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.997418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 
00:41:03.723 [2024-07-11 02:46:53.997515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.997544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.997663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.997690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.997807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.997834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.997921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.997949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.998081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.998136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 
00:41:03.723 [2024-07-11 02:46:53.998239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.998265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.998377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.998429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.998527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.998556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.998641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.998668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.723 [2024-07-11 02:46:53.998751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.998778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 
00:41:03.723 [2024-07-11 02:46:53.998896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.723 [2024-07-11 02:46:53.998923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.723 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:53.999013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:53.999040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:53.999130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:53.999158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:53.999250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:53.999277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:53.999372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:53.999398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 
00:41:03.724 [2024-07-11 02:46:53.999498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:53.999551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:53.999649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:53.999677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:53.999792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:53.999869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.000041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.000070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.000174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.000238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 
00:41:03.724 [2024-07-11 02:46:54.000349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.000392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.000487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.000522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.000644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.000670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.000760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.000787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.001018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.001044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 
00:41:03.724 [2024-07-11 02:46:54.001127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.001154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.001261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.001306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.001396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.001422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.001516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.001544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.001647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.001690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 
00:41:03.724 [2024-07-11 02:46:54.001816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.001858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.001949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.001984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.002074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.002101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.002218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.002245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.002360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.002387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 
00:41:03.724 [2024-07-11 02:46:54.002497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.002546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.002655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.002718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.002818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.002850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.002968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.003014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.003244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.003271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 
00:41:03.724 [2024-07-11 02:46:54.003374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.003417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.003535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.003563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.003660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.003691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.003810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.003853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.003936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.003962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 
00:41:03.724 [2024-07-11 02:46:54.004053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.004080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.004195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.004256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.004367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.004409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.004499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.004533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.004647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.004674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 
00:41:03.724 [2024-07-11 02:46:54.004783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.004846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.004959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.005005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.005101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.005130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.005243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.005287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.005377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.005404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 
00:41:03.724 [2024-07-11 02:46:54.005497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.005532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.005624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.005651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.005752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.005783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.005889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.005916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.006001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.006027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 
00:41:03.724 [2024-07-11 02:46:54.006139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.006184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.006271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.006298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.006401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.006432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.006628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.006656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.006778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.006833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 
00:41:03.724 [2024-07-11 02:46:54.006937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.006980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.007079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.007105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.007283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.007347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.007462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.007530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-07-11 02:46:54.007638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.724 [2024-07-11 02:46:54.007671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.724 qpair failed and we were unable to recover it. 
00:41:03.724 [2024-07-11 02:46:54.007794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.724 [2024-07-11 02:46:54.007859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.724 qpair failed and we were unable to recover it.
00:41:03.724 [2024-07-11 02:46:54.007961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.724 [2024-07-11 02:46:54.008008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.724 qpair failed and we were unable to recover it.
00:41:03.724 [2024-07-11 02:46:54.008130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.724 [2024-07-11 02:46:54.008184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.724 qpair failed and we were unable to recover it.
00:41:03.724 [2024-07-11 02:46:54.008356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.724 [2024-07-11 02:46:54.008383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.724 qpair failed and we were unable to recover it.
00:41:03.724 [2024-07-11 02:46:54.008501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.724 [2024-07-11 02:46:54.008562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.724 qpair failed and we were unable to recover it.
00:41:03.724 [2024-07-11 02:46:54.008650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.724 [2024-07-11 02:46:54.008677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.724 qpair failed and we were unable to recover it.
00:41:03.724 [2024-07-11 02:46:54.008762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.724 [2024-07-11 02:46:54.008789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.724 qpair failed and we were unable to recover it.
00:41:03.724 [2024-07-11 02:46:54.008892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.724 [2024-07-11 02:46:54.008925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.724 qpair failed and we were unable to recover it.
00:41:03.724 [2024-07-11 02:46:54.009166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.724 [2024-07-11 02:46:54.009194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.724 qpair failed and we were unable to recover it.
00:41:03.724 [2024-07-11 02:46:54.009378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.724 [2024-07-11 02:46:54.009436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.724 qpair failed and we were unable to recover it.
00:41:03.724 [2024-07-11 02:46:54.009522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.724 [2024-07-11 02:46:54.009549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.724 qpair failed and we were unable to recover it.
00:41:03.724 [2024-07-11 02:46:54.009630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.009656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.009770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.009813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.009923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.009956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.010077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.010120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.010276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.010324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.010421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.010454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.010599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.010658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.010753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.010781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.010864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.010891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.011013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.011065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.011154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.011183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.011274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.011302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.011427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.011484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.011626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.011677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.011786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.011828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.011958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.012023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.012117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.012143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.012238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.012285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.012468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.012497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.012594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.012623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.012733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.012796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.012949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.012975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.013057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.013084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.013189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.013231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.013332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.013363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.013471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.013500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.013611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.013644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.013788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.013832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.013943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.013986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.014168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.014219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.014400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.014458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.014589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.014642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.014792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.014864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.014969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.014996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.015084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.015110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.015212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.015242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.015378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.015439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.015543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.015571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.015763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.015818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.015929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.015984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.016074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.016104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.016229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.016285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.016395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.016459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.016568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.016631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.016749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.016777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.016866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.016892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.016983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.017009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.017099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.017126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.017212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.017239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.017320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.017346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.017462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.017489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.017654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.017696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.017854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.017900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.018015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.018060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.018220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.018272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.018361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.018388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.018500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.018550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.018645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.018672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.018801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.018852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.018965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.019011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.019131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.019182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.019290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.019349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.019482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.019571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.019673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.019717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.019826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.019888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.725 qpair failed and we were unable to recover it.
00:41:03.725 [2024-07-11 02:46:54.020025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.725 [2024-07-11 02:46:54.020108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.726 qpair failed and we were unable to recover it.
00:41:03.726 [2024-07-11 02:46:54.020190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.726 [2024-07-11 02:46:54.020217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.726 qpair failed and we were unable to recover it.
00:41:03.726 [2024-07-11 02:46:54.020332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.726 [2024-07-11 02:46:54.020387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.726 qpair failed and we were unable to recover it.
00:41:03.726 [2024-07-11 02:46:54.020563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.726 [2024-07-11 02:46:54.020598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.726 qpair failed and we were unable to recover it.
00:41:03.726 [2024-07-11 02:46:54.020779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.726 [2024-07-11 02:46:54.020840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.726 qpair failed and we were unable to recover it.
00:41:03.726 [2024-07-11 02:46:54.020948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.726 [2024-07-11 02:46:54.020991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.726 qpair failed and we were unable to recover it.
00:41:03.726 [2024-07-11 02:46:54.021098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.726 [2024-07-11 02:46:54.021143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.726 qpair failed and we were unable to recover it.
00:41:03.726 [2024-07-11 02:46:54.021251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.726 [2024-07-11 02:46:54.021295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.726 qpair failed and we were unable to recover it.
00:41:03.726 [2024-07-11 02:46:54.021461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.726 [2024-07-11 02:46:54.021489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.726 qpair failed and we were unable to recover it.
00:41:03.726 [2024-07-11 02:46:54.021652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.726 [2024-07-11 02:46:54.021704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.726 qpair failed and we were unable to recover it.
00:41:03.726 [2024-07-11 02:46:54.021806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.726 [2024-07-11 02:46:54.021850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.726 qpair failed and we were unable to recover it.
00:41:03.726 [2024-07-11 02:46:54.021958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.726 [2024-07-11 02:46:54.021999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.726 qpair failed and we were unable to recover it.
00:41:03.726 [2024-07-11 02:46:54.022082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.726 [2024-07-11 02:46:54.022108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.726 qpair failed and we were unable to recover it.
00:41:03.726 [2024-07-11 02:46:54.022209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.726 [2024-07-11 02:46:54.022240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.726 qpair failed and we were unable to recover it.
00:41:03.726 [2024-07-11 02:46:54.022410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.726 [2024-07-11 02:46:54.022468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.726 qpair failed and we were unable to recover it.
00:41:03.726 [2024-07-11 02:46:54.022668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.726 [2024-07-11 02:46:54.022710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.726 qpair failed and we were unable to recover it.
00:41:03.726 [2024-07-11 02:46:54.022818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.726 [2024-07-11 02:46:54.022848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.726 qpair failed and we were unable to recover it.
00:41:03.726 [2024-07-11 02:46:54.023001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.726 [2024-07-11 02:46:54.023029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.726 qpair failed and we were unable to recover it.
00:41:03.726 [2024-07-11 02:46:54.023172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.726 [2024-07-11 02:46:54.023224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.726 qpair failed and we were unable to recover it.
00:41:03.726 [2024-07-11 02:46:54.023362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.023417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.023530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.023575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.023667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.023693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.023781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.023806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.023908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.023970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 
00:41:03.726 [2024-07-11 02:46:54.024049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.024074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.024203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.024247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.024339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.024367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.024460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.024489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.024579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.024606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 
00:41:03.726 [2024-07-11 02:46:54.024739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.024785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.024875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.024903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.025005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.025039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.025195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.025243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.025356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.025420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 
00:41:03.726 [2024-07-11 02:46:54.025506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.025541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.025659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.025701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.025792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.025820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.025913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.025941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.026054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.026097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 
00:41:03.726 [2024-07-11 02:46:54.026190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.026217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.026323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.026348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.026439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.026468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.026587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.026635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.026744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.026805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 
00:41:03.726 [2024-07-11 02:46:54.026900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.026927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.027035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.027079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.027190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.027238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.027344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.027389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.027497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.027551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 
00:41:03.726 [2024-07-11 02:46:54.027666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.027719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.027822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.027866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.027951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.027978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.028093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.028136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.028225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.028253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 
00:41:03.726 [2024-07-11 02:46:54.028376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.028420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.028515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.028545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.028655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.028699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.028794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.028823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.028908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.028935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 
00:41:03.726 [2024-07-11 02:46:54.029018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.029045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.029140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.029168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.029257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.029284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.029375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.029402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 00:41:03.726 [2024-07-11 02:46:54.029502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.726 [2024-07-11 02:46:54.029534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.726 qpair failed and we were unable to recover it. 
00:41:03.727 [2024-07-11 02:46:54.029618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.029645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.029763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.029791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.029910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.029970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.030061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.030090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.030239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.030293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 
00:41:03.727 [2024-07-11 02:46:54.030380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.030408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.030539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.030593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.030700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.030745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.030860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.030903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.031027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.031087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 
00:41:03.727 [2024-07-11 02:46:54.031220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.031275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.031371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.031398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.031486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.031519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.031612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.031639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.031729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.031755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 
00:41:03.727 [2024-07-11 02:46:54.031853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.031880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.031965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.031992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.032104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.032131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.032224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.032253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.032346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.032375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 
00:41:03.727 [2024-07-11 02:46:54.032468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.032500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.032606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.032634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.032748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.032796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.032885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.032912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.032999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.033027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 
00:41:03.727 [2024-07-11 02:46:54.033112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.033141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.033226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.033254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.033366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.033393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.033476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.033503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.033603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.033631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 
00:41:03.727 [2024-07-11 02:46:54.033724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.033750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.033843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.033872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.033986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.034047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.034196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.034279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.034369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.034396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 
00:41:03.727 [2024-07-11 02:46:54.034526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.034584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.034676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.034702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.034811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.034854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.034967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.035009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.035113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.035161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 
00:41:03.727 [2024-07-11 02:46:54.035250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.035277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.035375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.035407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.035517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.035545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.035652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.035686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.035808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.035854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 
00:41:03.727 [2024-07-11 02:46:54.035934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.035959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.036055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.036081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.036167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.036193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.036281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.036308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 00:41:03.727 [2024-07-11 02:46:54.036420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.727 [2024-07-11 02:46:54.036476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.727 qpair failed and we were unable to recover it. 
00:41:03.727 [2024-07-11 02:46:54.036572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.727 [2024-07-11 02:46:54.036602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.727 qpair failed and we were unable to recover it.
00:41:03.727 [2024-07-11 02:46:54.036692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.727 [2024-07-11 02:46:54.036720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.727 qpair failed and we were unable to recover it.
00:41:03.727 [2024-07-11 02:46:54.036804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.727 [2024-07-11 02:46:54.036830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.727 qpair failed and we were unable to recover it.
00:41:03.727 [2024-07-11 02:46:54.036944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.727 [2024-07-11 02:46:54.036970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.727 qpair failed and we were unable to recover it.
00:41:03.727 [2024-07-11 02:46:54.037062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.727 [2024-07-11 02:46:54.037088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.727 qpair failed and we were unable to recover it.
00:41:03.727 [2024-07-11 02:46:54.037185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.727 [2024-07-11 02:46:54.037212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.727 qpair failed and we were unable to recover it.
00:41:03.727 [2024-07-11 02:46:54.037303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.727 [2024-07-11 02:46:54.037330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.727 qpair failed and we were unable to recover it.
00:41:03.727 [2024-07-11 02:46:54.037427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.727 [2024-07-11 02:46:54.037455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.727 qpair failed and we were unable to recover it.
00:41:03.727 [2024-07-11 02:46:54.037537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.727 [2024-07-11 02:46:54.037563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.727 qpair failed and we were unable to recover it.
00:41:03.727 [2024-07-11 02:46:54.037651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.727 [2024-07-11 02:46:54.037678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.727 qpair failed and we were unable to recover it.
00:41:03.727 [2024-07-11 02:46:54.037771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.727 [2024-07-11 02:46:54.037797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.727 qpair failed and we were unable to recover it.
00:41:03.727 [2024-07-11 02:46:54.037879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.727 [2024-07-11 02:46:54.037904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.727 qpair failed and we were unable to recover it.
00:41:03.727 [2024-07-11 02:46:54.037989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.727 [2024-07-11 02:46:54.038015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.727 qpair failed and we were unable to recover it.
00:41:03.727 [2024-07-11 02:46:54.038139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.727 [2024-07-11 02:46:54.038164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.727 qpair failed and we were unable to recover it.
00:41:03.727 [2024-07-11 02:46:54.038261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.727 [2024-07-11 02:46:54.038292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.727 qpair failed and we were unable to recover it.
00:41:03.727 [2024-07-11 02:46:54.038403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.727 [2024-07-11 02:46:54.038432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.727 qpair failed and we were unable to recover it.
00:41:03.727 [2024-07-11 02:46:54.038530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.727 [2024-07-11 02:46:54.038559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.727 qpair failed and we were unable to recover it.
00:41:03.727 [2024-07-11 02:46:54.038670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.727 [2024-07-11 02:46:54.038729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.727 qpair failed and we were unable to recover it.
00:41:03.727 [2024-07-11 02:46:54.038898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.727 [2024-07-11 02:46:54.038925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.727 qpair failed and we were unable to recover it.
00:41:03.727 [2024-07-11 02:46:54.039121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.727 [2024-07-11 02:46:54.039176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.727 qpair failed and we were unable to recover it.
00:41:03.727 [2024-07-11 02:46:54.039295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.039340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.039452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.039496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.039608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.039654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.039829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.039857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.040029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.040056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.040170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.040214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.040311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.040351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.040448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.040475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.040597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.040633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.040737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.040763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.040937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.040965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.041060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.041089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.041209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.041235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.041363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.041408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.041582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.041645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.041752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.041797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.041920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.041967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.042075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.042119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.042201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.042228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.042346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.042395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.042491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.042525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.042614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.042643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.042763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.042792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.042909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.042963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.043136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.043170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.043294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.043327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.043477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.043503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.043612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.043639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.043749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.043796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.043956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.043998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.044120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.044175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.044335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.044384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.044466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.044493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.044690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.044719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.044890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.044917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.045024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.045069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.045189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.045235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.045341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.045388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.045501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.045569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.045741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.045794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.045904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.045949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.046112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.046165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.046254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.046281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.046442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.046501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.046679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.046728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.046845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.046887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.047023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.047081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.047243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.047303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.047467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.047537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.047687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.047744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.047839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.047867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.047976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.048021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.048194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.048221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.048400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.048462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.048576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.048621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.048751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.048807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.048919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.048963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.049116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.049150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.049345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.049396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.049569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.049630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.049752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.049778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.049893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.049933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.050113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.050139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.050296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.050351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.050437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.050463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.050550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.050576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.728 qpair failed and we were unable to recover it.
00:41:03.728 [2024-07-11 02:46:54.050665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.728 [2024-07-11 02:46:54.050690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.729 qpair failed and we were unable to recover it.
00:41:03.729 [2024-07-11 02:46:54.050851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.729 [2024-07-11 02:46:54.050910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.729 qpair failed and we were unable to recover it.
00:41:03.729 [2024-07-11 02:46:54.051018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.729 [2024-07-11 02:46:54.051062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.729 qpair failed and we were unable to recover it.
00:41:03.729 [2024-07-11 02:46:54.051155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.729 [2024-07-11 02:46:54.051180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:03.729 qpair failed and we were unable to recover it.
00:41:03.729 [2024-07-11 02:46:54.051336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.729 [2024-07-11 02:46:54.051385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.729 qpair failed and we were unable to recover it.
00:41:03.729 [2024-07-11 02:46:54.051565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.729 [2024-07-11 02:46:54.051593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.729 qpair failed and we were unable to recover it.
00:41:03.729 [2024-07-11 02:46:54.051713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.729 [2024-07-11 02:46:54.051757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.729 qpair failed and we were unable to recover it.
00:41:03.729 [2024-07-11 02:46:54.051865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.729 [2024-07-11 02:46:54.051917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:03.729 qpair failed and we were unable to recover it.
00:41:03.729 [2024-07-11 02:46:54.052097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:03.729 [2024-07-11 02:46:54.052163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:03.729 qpair failed and we were unable to recover it.
00:41:03.729 [2024-07-11 02:46:54.052335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.729 [2024-07-11 02:46:54.052387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:03.729 qpair failed and we were unable to recover it. 00:41:03.729 [2024-07-11 02:46:54.052477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.729 [2024-07-11 02:46:54.052503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.729 qpair failed and we were unable to recover it. 00:41:03.729 [2024-07-11 02:46:54.052678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.729 [2024-07-11 02:46:54.052731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.729 qpair failed and we were unable to recover it. 00:41:03.729 [2024-07-11 02:46:54.052898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.729 [2024-07-11 02:46:54.052953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.729 qpair failed and we were unable to recover it. 00:41:03.729 [2024-07-11 02:46:54.053066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.729 [2024-07-11 02:46:54.053109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.729 qpair failed and we were unable to recover it. 
00:41:03.729 [2024-07-11 02:46:54.053215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.729 [2024-07-11 02:46:54.053260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:03.729 qpair failed and we were unable to recover it. 00:41:03.729 [2024-07-11 02:46:54.053382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.729 [2024-07-11 02:46:54.053454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.729 qpair failed and we were unable to recover it. 00:41:03.729 [2024-07-11 02:46:54.053643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.729 [2024-07-11 02:46:54.053673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.729 qpair failed and we were unable to recover it. 00:41:03.729 [2024-07-11 02:46:54.053845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.729 [2024-07-11 02:46:54.053879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.729 qpair failed and we were unable to recover it. 00:41:03.729 [2024-07-11 02:46:54.054010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:03.729 [2024-07-11 02:46:54.054054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:03.729 qpair failed and we were unable to recover it. 
00:41:04.049 [2024-07-11 02:46:54.070272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.049 [2024-07-11 02:46:54.070303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.049 qpair failed and we were unable to recover it. 00:41:04.049 [2024-07-11 02:46:54.070438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.049 [2024-07-11 02:46:54.070465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.049 qpair failed and we were unable to recover it. 00:41:04.049 [2024-07-11 02:46:54.070561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.049 [2024-07-11 02:46:54.070589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.049 qpair failed and we were unable to recover it. 00:41:04.049 [2024-07-11 02:46:54.070711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.049 [2024-07-11 02:46:54.070737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.049 qpair failed and we were unable to recover it. 00:41:04.049 [2024-07-11 02:46:54.070829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.049 [2024-07-11 02:46:54.070857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.049 qpair failed and we were unable to recover it. 
00:41:04.049 [2024-07-11 02:46:54.070944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.049 [2024-07-11 02:46:54.070971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.049 qpair failed and we were unable to recover it. 00:41:04.049 [2024-07-11 02:46:54.071096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.049 [2024-07-11 02:46:54.071142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.049 qpair failed and we were unable to recover it. 00:41:04.049 [2024-07-11 02:46:54.071263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.049 [2024-07-11 02:46:54.071320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.049 qpair failed and we were unable to recover it. 00:41:04.049 [2024-07-11 02:46:54.071485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.049 [2024-07-11 02:46:54.071521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.049 qpair failed and we were unable to recover it. 00:41:04.049 [2024-07-11 02:46:54.071677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.049 [2024-07-11 02:46:54.071731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.049 qpair failed and we were unable to recover it. 
00:41:04.049 [2024-07-11 02:46:54.071908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.049 [2024-07-11 02:46:54.071961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.049 qpair failed and we were unable to recover it. 00:41:04.049 [2024-07-11 02:46:54.072090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.049 [2024-07-11 02:46:54.072154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.049 qpair failed and we were unable to recover it. 00:41:04.049 [2024-07-11 02:46:54.072250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.049 [2024-07-11 02:46:54.072277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.049 qpair failed and we were unable to recover it. 00:41:04.049 [2024-07-11 02:46:54.072387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.049 [2024-07-11 02:46:54.072421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.049 qpair failed and we were unable to recover it. 00:41:04.049 [2024-07-11 02:46:54.072524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.049 [2024-07-11 02:46:54.072553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.049 qpair failed and we were unable to recover it. 
00:41:04.049 [2024-07-11 02:46:54.072706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.049 [2024-07-11 02:46:54.072761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.072851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.072877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.072992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.073039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.073155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.073199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.073291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.073318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 
00:41:04.050 [2024-07-11 02:46:54.073437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.073481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.073640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.073702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.073799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.073827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.073939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.074002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.074159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.074211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 
00:41:04.050 [2024-07-11 02:46:54.074324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.074368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.074475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.074532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.074646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.074707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.074874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.074927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.075014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.075042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 
00:41:04.050 [2024-07-11 02:46:54.075151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.075198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.075350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.075415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.075500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.075534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.075706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.075733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.075845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.075889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 
00:41:04.050 [2024-07-11 02:46:54.075996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.076046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.076136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.076165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.076282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.076324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.076423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.076470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.076599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.076646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 
00:41:04.050 [2024-07-11 02:46:54.076816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.076843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.076938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.076965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.077073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.077118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.077201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.077228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.077330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.077377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 
00:41:04.050 [2024-07-11 02:46:54.077466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.077495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.077653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.077703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.077792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.077820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.077924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.077987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.078108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.078153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 
00:41:04.050 [2024-07-11 02:46:54.078245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.078272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.078368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.050 [2024-07-11 02:46:54.078395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.050 qpair failed and we were unable to recover it. 00:41:04.050 [2024-07-11 02:46:54.078508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.051 [2024-07-11 02:46:54.078554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.051 qpair failed and we were unable to recover it. 00:41:04.051 [2024-07-11 02:46:54.078678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.051 [2024-07-11 02:46:54.078707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.051 qpair failed and we were unable to recover it. 00:41:04.051 [2024-07-11 02:46:54.078826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.051 [2024-07-11 02:46:54.078859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.051 qpair failed and we were unable to recover it. 
00:41:04.051 [2024-07-11 02:46:54.078952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.051 [2024-07-11 02:46:54.079000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.051 qpair failed and we were unable to recover it. 00:41:04.051 [2024-07-11 02:46:54.079206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.051 [2024-07-11 02:46:54.079236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.051 qpair failed and we were unable to recover it. 00:41:04.051 [2024-07-11 02:46:54.079333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.051 [2024-07-11 02:46:54.079362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.051 qpair failed and we were unable to recover it. 00:41:04.051 [2024-07-11 02:46:54.079482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.051 [2024-07-11 02:46:54.079516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.051 qpair failed and we were unable to recover it. 00:41:04.051 [2024-07-11 02:46:54.079609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.051 [2024-07-11 02:46:54.079636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.051 qpair failed and we were unable to recover it. 
00:41:04.051 [2024-07-11 02:46:54.079725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.051 [2024-07-11 02:46:54.079751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.051 qpair failed and we were unable to recover it. 00:41:04.051 [2024-07-11 02:46:54.079846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.051 [2024-07-11 02:46:54.079875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.051 qpair failed and we were unable to recover it. 00:41:04.051 [2024-07-11 02:46:54.079975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.051 [2024-07-11 02:46:54.080004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.051 qpair failed and we were unable to recover it. 00:41:04.051 [2024-07-11 02:46:54.080103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.051 [2024-07-11 02:46:54.080131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.051 qpair failed and we were unable to recover it. 00:41:04.051 [2024-07-11 02:46:54.080226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.051 [2024-07-11 02:46:54.080253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.051 qpair failed and we were unable to recover it. 
00:41:04.051 [2024-07-11 02:46:54.080341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.051 [2024-07-11 02:46:54.080369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.051 qpair failed and we were unable to recover it. 00:41:04.051 [2024-07-11 02:46:54.080456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.051 [2024-07-11 02:46:54.080483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.051 qpair failed and we were unable to recover it. 00:41:04.051 [2024-07-11 02:46:54.080589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.051 [2024-07-11 02:46:54.080616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.051 qpair failed and we were unable to recover it. 00:41:04.051 [2024-07-11 02:46:54.080723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.051 [2024-07-11 02:46:54.080750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.051 qpair failed and we were unable to recover it. 00:41:04.051 [2024-07-11 02:46:54.080837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.051 [2024-07-11 02:46:54.080864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.051 qpair failed and we were unable to recover it. 
00:41:04.051 [2024-07-11 02:46:54.080956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.051 [2024-07-11 02:46:54.080983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.051 qpair failed and we were unable to recover it. 00:41:04.051 [2024-07-11 02:46:54.081078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.051 [2024-07-11 02:46:54.081106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.051 qpair failed and we were unable to recover it. 00:41:04.051 [2024-07-11 02:46:54.081304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.051 [2024-07-11 02:46:54.081330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.051 qpair failed and we were unable to recover it. 00:41:04.051 [2024-07-11 02:46:54.081421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.051 [2024-07-11 02:46:54.081449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.051 qpair failed and we were unable to recover it. 00:41:04.051 [2024-07-11 02:46:54.081584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.051 [2024-07-11 02:46:54.081632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.051 qpair failed and we were unable to recover it. 
00:41:04.051 [2024-07-11 02:46:54.081746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.051 [2024-07-11 02:46:54.081790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.051 qpair failed and we were unable to recover it.
00:41:04.051 [2024-07-11 02:46:54.081911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.051 [2024-07-11 02:46:54.081943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.051 qpair failed and we were unable to recover it.
00:41:04.051 [2024-07-11 02:46:54.082076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.051 [2024-07-11 02:46:54.082120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.051 qpair failed and we were unable to recover it.
00:41:04.051 [2024-07-11 02:46:54.082208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.051 [2024-07-11 02:46:54.082236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.051 qpair failed and we were unable to recover it.
00:41:04.051 [2024-07-11 02:46:54.082346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.051 [2024-07-11 02:46:54.082379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.051 qpair failed and we were unable to recover it.
00:41:04.051 [2024-07-11 02:46:54.082484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.051 [2024-07-11 02:46:54.082518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.051 qpair failed and we were unable to recover it.
00:41:04.051 [2024-07-11 02:46:54.082633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.051 [2024-07-11 02:46:54.082682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.051 qpair failed and we were unable to recover it.
00:41:04.051 [2024-07-11 02:46:54.082777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.051 [2024-07-11 02:46:54.082805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.051 qpair failed and we were unable to recover it.
00:41:04.051 [2024-07-11 02:46:54.082897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.051 [2024-07-11 02:46:54.082925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.051 qpair failed and we were unable to recover it.
00:41:04.051 [2024-07-11 02:46:54.083035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.051 [2024-07-11 02:46:54.083077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.051 qpair failed and we were unable to recover it.
00:41:04.051 [2024-07-11 02:46:54.083163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.051 [2024-07-11 02:46:54.083192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.051 qpair failed and we were unable to recover it.
00:41:04.051 [2024-07-11 02:46:54.083279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.051 [2024-07-11 02:46:54.083306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.051 qpair failed and we were unable to recover it.
00:41:04.051 [2024-07-11 02:46:54.083399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.051 [2024-07-11 02:46:54.083427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.051 qpair failed and we were unable to recover it.
00:41:04.051 [2024-07-11 02:46:54.083536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.051 [2024-07-11 02:46:54.083566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.051 qpair failed and we were unable to recover it.
00:41:04.051 [2024-07-11 02:46:54.083665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.051 [2024-07-11 02:46:54.083700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.051 qpair failed and we were unable to recover it.
00:41:04.051 [2024-07-11 02:46:54.083826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.051 [2024-07-11 02:46:54.083900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.051 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.084016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.084061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.084166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.084226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.084320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.084348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.084447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.084475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.084579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.084615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.084703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.084730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.084821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.084847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.084935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.084961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.085164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.085191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.085275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.085301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.085386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.085413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.085498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.085532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.085646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.085692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.085793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.085828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.085955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.085989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.086117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.086151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.086275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.086318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.086414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.086442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.086532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.086559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.086651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.086677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.086792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.086818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.086902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.086928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.087017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.087044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.087146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.087191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.087281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.087307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.087428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.087454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.087565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.087594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.087690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.087717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.087801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.087829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.087915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.087942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.088024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.088055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.088176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.088202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.088324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.088350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.088440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.088467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.088584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.088629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.088714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.088741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.088828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.088855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.052 qpair failed and we were unable to recover it.
00:41:04.052 [2024-07-11 02:46:54.088936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.052 [2024-07-11 02:46:54.088963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.089071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.089117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.089209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.089236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.089318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.089345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.089439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.089465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.089585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.089628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.089737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.089793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.089894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.089923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.090094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.090151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.090299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.090332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.090457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.090503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.090602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.090630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.090761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.090816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.090908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.090935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.091042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.091088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.091191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.091238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.091344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.091410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.091557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.091601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.091724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.091751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.091840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.091867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.091961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.091992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.092116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.092143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.092237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.092264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.092364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.092397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.092493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.092531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.092629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.092656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.092781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.092808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.092901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.092929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.093021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.093051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.093145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.093173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.093268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.093295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.093383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.093411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.093502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.093535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.093659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.093686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.093783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.093810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.093903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.093929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.094021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.094047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.094134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.094161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.094246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.094271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.094362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.094389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.094484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.053 [2024-07-11 02:46:54.094522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.053 qpair failed and we were unable to recover it.
00:41:04.053 [2024-07-11 02:46:54.094618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.054 [2024-07-11 02:46:54.094652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.054 qpair failed and we were unable to recover it.
00:41:04.054 [2024-07-11 02:46:54.094738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.054 [2024-07-11 02:46:54.094767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.054 qpair failed and we were unable to recover it.
00:41:04.054 [2024-07-11 02:46:54.094863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.054 [2024-07-11 02:46:54.094891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.054 qpair failed and we were unable to recover it.
00:41:04.054 [2024-07-11 02:46:54.094976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.054 [2024-07-11 02:46:54.095003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.054 qpair failed and we were unable to recover it.
00:41:04.054 [2024-07-11 02:46:54.095087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.054 [2024-07-11 02:46:54.095113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.054 qpair failed and we were unable to recover it.
00:41:04.054 [2024-07-11 02:46:54.095236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.054 [2024-07-11 02:46:54.095264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.054 qpair failed and we were unable to recover it.
00:41:04.054 [2024-07-11 02:46:54.095359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.054 [2024-07-11 02:46:54.095395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.054 qpair failed and we were unable to recover it.
00:41:04.054 [2024-07-11 02:46:54.095485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.054 [2024-07-11 02:46:54.095520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.054 qpair failed and we were unable to recover it.
00:41:04.054 [2024-07-11 02:46:54.095614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.054 [2024-07-11 02:46:54.095641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.054 qpair failed and we were unable to recover it.
00:41:04.054 [2024-07-11 02:46:54.095729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.054 [2024-07-11 02:46:54.095756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.054 qpair failed and we were unable to recover it.
00:41:04.054 [2024-07-11 02:46:54.095844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.054 [2024-07-11 02:46:54.095871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.054 qpair failed and we were unable to recover it.
00:41:04.054 [2024-07-11 02:46:54.095960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.054 [2024-07-11 02:46:54.095987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.054 qpair failed and we were unable to recover it.
00:41:04.054 [2024-07-11 02:46:54.096085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.054 [2024-07-11 02:46:54.096112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.054 qpair failed and we were unable to recover it.
00:41:04.054 [2024-07-11 02:46:54.096229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.054 [2024-07-11 02:46:54.096256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.054 qpair failed and we were unable to recover it.
00:41:04.054 [2024-07-11 02:46:54.096350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.054 [2024-07-11 02:46:54.096376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.054 qpair failed and we were unable to recover it.
00:41:04.054 [2024-07-11 02:46:54.096466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.054 [2024-07-11 02:46:54.096493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.054 qpair failed and we were unable to recover it.
00:41:04.054 [2024-07-11 02:46:54.096582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.054 [2024-07-11 02:46:54.096609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.054 qpair failed and we were unable to recover it.
00:41:04.054 [2024-07-11 02:46:54.096703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.054 [2024-07-11 02:46:54.096730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.054 qpair failed and we were unable to recover it.
00:41:04.054 [2024-07-11 02:46:54.096829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.054 [2024-07-11 02:46:54.096892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.054 qpair failed and we were unable to recover it.
00:41:04.054 [2024-07-11 02:46:54.096985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.054 [2024-07-11 02:46:54.097013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.054 qpair failed and we were unable to recover it.
00:41:04.054 [2024-07-11 02:46:54.097110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.054 [2024-07-11 02:46:54.097137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.054 qpair failed and we were unable to recover it.
00:41:04.054 [2024-07-11 02:46:54.097255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.054 [2024-07-11 02:46:54.097282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.054 qpair failed and we were unable to recover it. 00:41:04.054 [2024-07-11 02:46:54.097372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.054 [2024-07-11 02:46:54.097398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.054 qpair failed and we were unable to recover it. 00:41:04.054 [2024-07-11 02:46:54.097482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.054 [2024-07-11 02:46:54.097508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.054 qpair failed and we were unable to recover it. 00:41:04.054 [2024-07-11 02:46:54.097632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.054 [2024-07-11 02:46:54.097661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.054 qpair failed and we were unable to recover it. 00:41:04.054 [2024-07-11 02:46:54.097750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.054 [2024-07-11 02:46:54.097777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.054 qpair failed and we were unable to recover it. 
00:41:04.054 [2024-07-11 02:46:54.097865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.054 [2024-07-11 02:46:54.097893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.054 qpair failed and we were unable to recover it. 00:41:04.054 [2024-07-11 02:46:54.097982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.054 [2024-07-11 02:46:54.098008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.054 qpair failed and we were unable to recover it. 00:41:04.054 [2024-07-11 02:46:54.098154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.054 [2024-07-11 02:46:54.098199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.054 qpair failed and we were unable to recover it. 00:41:04.054 [2024-07-11 02:46:54.098288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.098315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.098399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.098426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 
00:41:04.055 [2024-07-11 02:46:54.098521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.098549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.098682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.098735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.098820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.098851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.098934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.098960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.099049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.099076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 
00:41:04.055 [2024-07-11 02:46:54.099162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.099189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.099281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.099311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.099398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.099425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.099522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.099550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.099662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.099690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 
00:41:04.055 [2024-07-11 02:46:54.099778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.099805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.099908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.099935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.100018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.100044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.100133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.100160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.100252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.100280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 
00:41:04.055 [2024-07-11 02:46:54.100372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.100399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.100535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.100584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.100681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.100708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.100805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.100832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.100921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.100949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 
00:41:04.055 [2024-07-11 02:46:54.101050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.101090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.101191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.101218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.101310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.101341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.101467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.101532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.101658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.101720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 
00:41:04.055 [2024-07-11 02:46:54.101804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.101830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.101923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.101952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.102051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.102077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.102188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.102218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.102321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.102356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 
00:41:04.055 [2024-07-11 02:46:54.102455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.102489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.102610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.102639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.102870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.102897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.102987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.103014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.103106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.103133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 
00:41:04.055 [2024-07-11 02:46:54.103262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.103296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.103404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.055 [2024-07-11 02:46:54.103436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.055 qpair failed and we were unable to recover it. 00:41:04.055 [2024-07-11 02:46:54.103541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.103592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.103698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.103725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.103835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.103895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 
00:41:04.056 [2024-07-11 02:46:54.104000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.104028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.104124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.104152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.104252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.104279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.104370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.104396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.104485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.104523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 
00:41:04.056 [2024-07-11 02:46:54.104654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.104686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.104932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.104959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.105053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.105080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.105192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.105236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.105338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.105385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 
00:41:04.056 [2024-07-11 02:46:54.105468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.105495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.105619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.105665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.105779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.105824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.105925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.105981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.106093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.106138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 
00:41:04.056 [2024-07-11 02:46:54.106249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.106299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.106392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.106419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.106532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.106579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.106664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.106691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.106778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.106805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 
00:41:04.056 [2024-07-11 02:46:54.107037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.107065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.107294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.107321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.107426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.107474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.107618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.107673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.107757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.107784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 
00:41:04.056 [2024-07-11 02:46:54.107889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.107927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.108066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.108121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.108213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.108242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.108333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.108361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.108444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.108477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 
00:41:04.056 [2024-07-11 02:46:54.108575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.108602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.108698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.108729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.108821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.108849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.108967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.109025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 00:41:04.056 [2024-07-11 02:46:54.109260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.056 [2024-07-11 02:46:54.109287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.056 qpair failed and we were unable to recover it. 
00:41:04.056 [2024-07-11 02:46:54.109395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.056 [2024-07-11 02:46:54.109443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.056 qpair failed and we were unable to recover it.
00:41:04.056 [2024-07-11 02:46:54.109555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.109593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.109695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.109722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.109813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.109841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.109925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.109952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.110050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.110078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.110168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.110197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.110284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.110310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.110401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.110429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.110518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.110545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.110638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.110722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.110830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.110867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.110968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.110994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.111076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.111103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.111186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.111214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.111302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.111328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.111414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.111441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.111533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.111564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.111654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.111681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.111765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.111791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.111889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.111919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.112010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.112042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.112250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.112279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.112361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.112387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.112493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.112548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.112641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.112669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.112771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.112816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.112899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.112926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.113035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.113083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.113177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.113204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.113299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.113329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.113464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.113526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.113632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.113659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.113751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.113778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.113871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.113898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.113991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.114018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.114113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.114142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.114226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.114255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.114339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.114366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.114459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.114487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.114582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.114609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.057 [2024-07-11 02:46:54.114699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.057 [2024-07-11 02:46:54.114725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.057 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.114815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.114843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.114957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.115002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.115110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.115159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.115252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.115280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.115401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.115452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.115571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.115628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.115782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.115811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.115896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.115923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.116010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.116036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.116126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.116153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.116248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.116275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.116363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.116392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.116480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.116515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.116610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.116638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.116748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.116794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.116914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.116969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.117051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.117077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.117168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.117194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.117278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.117304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.117391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.117421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.117508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.117539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.117667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.117731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.117817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.117843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.117928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.117957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.118050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.118077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.118172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.118201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.118289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.118317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.118416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.118444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.118534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.118561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.118652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.118678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.118764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.118790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.118873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.118898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.118995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.119020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.058 [2024-07-11 02:46:54.119129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.058 [2024-07-11 02:46:54.119176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.058 qpair failed and we were unable to recover it.
00:41:04.059 [2024-07-11 02:46:54.119274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.059 [2024-07-11 02:46:54.119304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.059 qpair failed and we were unable to recover it.
00:41:04.059 [2024-07-11 02:46:54.119408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.059 [2024-07-11 02:46:54.119435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.059 qpair failed and we were unable to recover it.
00:41:04.059 [2024-07-11 02:46:54.119534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.059 [2024-07-11 02:46:54.119562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.059 qpair failed and we were unable to recover it.
00:41:04.059 [2024-07-11 02:46:54.119646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.059 [2024-07-11 02:46:54.119672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.059 qpair failed and we were unable to recover it.
00:41:04.059 [2024-07-11 02:46:54.119760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.059 [2024-07-11 02:46:54.119787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.059 qpair failed and we were unable to recover it.
00:41:04.059 [2024-07-11 02:46:54.119871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.059 [2024-07-11 02:46:54.119897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.059 qpair failed and we were unable to recover it.
00:41:04.059 [2024-07-11 02:46:54.119991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.059 [2024-07-11 02:46:54.120019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.059 qpair failed and we were unable to recover it.
00:41:04.059 [2024-07-11 02:46:54.120115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.059 [2024-07-11 02:46:54.120144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.059 qpair failed and we were unable to recover it.
00:41:04.059 [2024-07-11 02:46:54.120230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.059 [2024-07-11 02:46:54.120257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.059 qpair failed and we were unable to recover it.
00:41:04.059 [2024-07-11 02:46:54.120345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.059 [2024-07-11 02:46:54.120371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.059 qpair failed and we were unable to recover it.
00:41:04.059 [2024-07-11 02:46:54.120451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.059 [2024-07-11 02:46:54.120478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.059 qpair failed and we were unable to recover it.
00:41:04.059 [2024-07-11 02:46:54.120583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.059 [2024-07-11 02:46:54.120612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.059 qpair failed and we were unable to recover it.
00:41:04.059 [2024-07-11 02:46:54.120699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.059 [2024-07-11 02:46:54.120732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.059 qpair failed and we were unable to recover it.
00:41:04.059 [2024-07-11 02:46:54.120822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.059 [2024-07-11 02:46:54.120858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.059 qpair failed and we were unable to recover it.
00:41:04.059 [2024-07-11 02:46:54.120947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.059 [2024-07-11 02:46:54.120975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.059 qpair failed and we were unable to recover it.
00:41:04.059 [2024-07-11 02:46:54.121060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.059 [2024-07-11 02:46:54.121087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.059 qpair failed and we were unable to recover it.
00:41:04.059 [2024-07-11 02:46:54.121176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.059 [2024-07-11 02:46:54.121204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.059 qpair failed and we were unable to recover it.
00:41:04.059 [2024-07-11 02:46:54.121287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.059 [2024-07-11 02:46:54.121315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.059 qpair failed and we were unable to recover it.
00:41:04.059 [2024-07-11 02:46:54.121408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.059 [2024-07-11 02:46:54.121436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.059 qpair failed and we were unable to recover it.
00:41:04.059 [2024-07-11 02:46:54.121540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.059 [2024-07-11 02:46:54.121568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.059 qpair failed and we were unable to recover it. 00:41:04.059 [2024-07-11 02:46:54.121656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.059 [2024-07-11 02:46:54.121686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.059 qpair failed and we were unable to recover it. 00:41:04.059 [2024-07-11 02:46:54.121794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.059 [2024-07-11 02:46:54.121820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.059 qpair failed and we were unable to recover it. 00:41:04.059 [2024-07-11 02:46:54.121901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.059 [2024-07-11 02:46:54.121927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.059 qpair failed and we were unable to recover it. 00:41:04.059 [2024-07-11 02:46:54.122010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.059 [2024-07-11 02:46:54.122036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.059 qpair failed and we were unable to recover it. 
00:41:04.059 [2024-07-11 02:46:54.122122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.059 [2024-07-11 02:46:54.122148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.059 qpair failed and we were unable to recover it. 00:41:04.059 [2024-07-11 02:46:54.122234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.059 [2024-07-11 02:46:54.122261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.059 qpair failed and we were unable to recover it. 00:41:04.059 [2024-07-11 02:46:54.122353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.059 [2024-07-11 02:46:54.122380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.059 qpair failed and we were unable to recover it. 00:41:04.059 [2024-07-11 02:46:54.122473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.059 [2024-07-11 02:46:54.122500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.059 qpair failed and we were unable to recover it. 00:41:04.059 [2024-07-11 02:46:54.122595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.059 [2024-07-11 02:46:54.122620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.059 qpair failed and we were unable to recover it. 
00:41:04.059 [2024-07-11 02:46:54.122726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.059 [2024-07-11 02:46:54.122760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.059 qpair failed and we were unable to recover it. 00:41:04.059 [2024-07-11 02:46:54.122859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.059 [2024-07-11 02:46:54.122886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.059 qpair failed and we were unable to recover it. 00:41:04.059 [2024-07-11 02:46:54.122979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.059 [2024-07-11 02:46:54.123008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.059 qpair failed and we were unable to recover it. 00:41:04.059 [2024-07-11 02:46:54.123102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.059 [2024-07-11 02:46:54.123129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.059 qpair failed and we were unable to recover it. 00:41:04.059 [2024-07-11 02:46:54.123226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.059 [2024-07-11 02:46:54.123261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.059 qpair failed and we were unable to recover it. 
00:41:04.059 [2024-07-11 02:46:54.123366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.059 [2024-07-11 02:46:54.123392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.059 qpair failed and we were unable to recover it. 00:41:04.059 [2024-07-11 02:46:54.123483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.059 [2024-07-11 02:46:54.123518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.059 qpair failed and we were unable to recover it. 00:41:04.059 [2024-07-11 02:46:54.123610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.059 [2024-07-11 02:46:54.123636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.059 qpair failed and we were unable to recover it. 00:41:04.059 [2024-07-11 02:46:54.123725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.059 [2024-07-11 02:46:54.123753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.059 qpair failed and we were unable to recover it. 00:41:04.059 [2024-07-11 02:46:54.123861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.059 [2024-07-11 02:46:54.123888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.059 qpair failed and we were unable to recover it. 
00:41:04.059 [2024-07-11 02:46:54.123974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.059 [2024-07-11 02:46:54.124003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.059 qpair failed and we were unable to recover it. 00:41:04.059 [2024-07-11 02:46:54.124092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.124118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.124208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.124234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.124315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.124342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.124435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.124461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 
00:41:04.060 [2024-07-11 02:46:54.124550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.124577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.124667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.124693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.124779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.124805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.124891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.124916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.125004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.125030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 
00:41:04.060 [2024-07-11 02:46:54.125115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.125141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.125247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.125291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.125372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.125399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.125523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.125571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.125696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.125758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 
00:41:04.060 [2024-07-11 02:46:54.125850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.125878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.125967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.125995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.126104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.126150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.126262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.126309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.126405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.126432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 
00:41:04.060 [2024-07-11 02:46:54.126526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.126553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.126656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.126682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.126764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.126791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.126878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.126905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.126996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.127025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 
00:41:04.060 [2024-07-11 02:46:54.127119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.127148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.127238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.127265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.127362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.127388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.127485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.127527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.127643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.127705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 
00:41:04.060 [2024-07-11 02:46:54.127793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.127820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.127902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.127929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.128038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.128082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.128175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.128204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.128290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.128316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 
00:41:04.060 [2024-07-11 02:46:54.128425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.128483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.128591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.128619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.128708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.128735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.128823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.128849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.128939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.128967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 
00:41:04.060 [2024-07-11 02:46:54.129048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.129079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.060 [2024-07-11 02:46:54.129168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.060 [2024-07-11 02:46:54.129195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.060 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.129285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.129311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.129395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.129421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.129505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.129541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 
00:41:04.061 [2024-07-11 02:46:54.129646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.129680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.129777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.129804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.129887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.129913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.130002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.130032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.130115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.130141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 
00:41:04.061 [2024-07-11 02:46:54.130228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.130256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.130347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.130373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.130455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.130480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.130573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.130598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.130695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.130721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 
00:41:04.061 [2024-07-11 02:46:54.130822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.130864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.130949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.130975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.131077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.131121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.131228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.131276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.131377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.131404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 
00:41:04.061 [2024-07-11 02:46:54.131490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.131532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.131629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.131655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.131743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.131772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.131866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.131894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.131985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.132013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 
00:41:04.061 [2024-07-11 02:46:54.132108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.132134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.132228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.132255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.132354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.132386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.132486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.132525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.132622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.132649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 
00:41:04.061 [2024-07-11 02:46:54.132749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.132776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.132862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.132889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.132979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.133005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.133099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.133126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.133225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.133290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 
00:41:04.061 [2024-07-11 02:46:54.133380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.133407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.133495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.133527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.133617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.133645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.133734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.133761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.133846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.133872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 
00:41:04.061 [2024-07-11 02:46:54.133960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.133986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.134092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.061 [2024-07-11 02:46:54.134124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.061 qpair failed and we were unable to recover it. 00:41:04.061 [2024-07-11 02:46:54.134219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.134247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.134339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.134366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.134452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.134479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 
00:41:04.062 [2024-07-11 02:46:54.134581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.134609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.134694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.134720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.134819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.134845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.134939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.134965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.135050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.135077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 
00:41:04.062 [2024-07-11 02:46:54.135163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.135190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.135275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.135303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.135384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.135411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.135504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.135538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.135629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.135659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 
00:41:04.062 [2024-07-11 02:46:54.135750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.135777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.135873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.135901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.135981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.136007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.136096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.136122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.136206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.136232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 
00:41:04.062 [2024-07-11 02:46:54.136320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.136348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.136440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.136467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.136577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.136605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.136699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.136726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.136817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.136848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 
00:41:04.062 [2024-07-11 02:46:54.136946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.136973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.137059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.137086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.137177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.137204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.137303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.137330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.137424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.137451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 
00:41:04.062 [2024-07-11 02:46:54.137548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.137574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.137662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.137688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.137773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.137799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.137889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.137914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.137998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.138023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 
00:41:04.062 [2024-07-11 02:46:54.138125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.138151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.138245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.138272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.138353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.138379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.138468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.138495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.062 qpair failed and we were unable to recover it. 00:41:04.062 [2024-07-11 02:46:54.138584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.062 [2024-07-11 02:46:54.138610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 
00:41:04.063 [2024-07-11 02:46:54.138692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.138717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.138804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.138835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.138928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.138953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.139042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.139068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.139170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.139197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 
00:41:04.063 [2024-07-11 02:46:54.139281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.139307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.139397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.139423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.139515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.139542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.139624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.139650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.139738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.139763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 
00:41:04.063 [2024-07-11 02:46:54.139855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.139880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.139960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.139986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.140074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.140099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.140189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.140216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.140310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.140337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 
00:41:04.063 [2024-07-11 02:46:54.140429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.140455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.140545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.140572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.140665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.140691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.140780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.140806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.140922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.140985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 
00:41:04.063 [2024-07-11 02:46:54.141077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.141107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.141203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.141230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.141320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.141346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.141434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.141460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.141549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.141577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 
00:41:04.063 [2024-07-11 02:46:54.141666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.141693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.141780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.141805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.141895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.141921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.142002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.142032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.142136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.142164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 
00:41:04.063 [2024-07-11 02:46:54.142253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.142279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.142364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.142390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.142481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.142507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.142618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.142644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.142739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.142766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 
00:41:04.063 [2024-07-11 02:46:54.142860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.142887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.142970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.142996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.143083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.063 [2024-07-11 02:46:54.143108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.063 qpair failed and we were unable to recover it. 00:41:04.063 [2024-07-11 02:46:54.143199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.064 [2024-07-11 02:46:54.143228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.064 qpair failed and we were unable to recover it. 00:41:04.064 [2024-07-11 02:46:54.143323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.064 [2024-07-11 02:46:54.143352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.064 qpair failed and we were unable to recover it. 
00:41:04.064 [2024-07-11 02:46:54.143442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.064 [2024-07-11 02:46:54.143469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.064 qpair failed and we were unable to recover it. 00:41:04.064 [2024-07-11 02:46:54.143559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.064 [2024-07-11 02:46:54.143586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.064 qpair failed and we were unable to recover it. 00:41:04.064 [2024-07-11 02:46:54.143672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.064 [2024-07-11 02:46:54.143699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.064 qpair failed and we were unable to recover it. 00:41:04.064 [2024-07-11 02:46:54.143786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.064 [2024-07-11 02:46:54.143812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.064 qpair failed and we were unable to recover it. 00:41:04.064 [2024-07-11 02:46:54.143937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.064 [2024-07-11 02:46:54.143965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.064 qpair failed and we were unable to recover it. 
00:41:04.065 [2024-07-11 02:46:54.148536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:41:04.065 [2024-07-11 02:46:54.148586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 
00:41:04.065 qpair failed and we were unable to recover it. 
00:41:04.066 [2024-07-11 02:46:54.157855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.066 [2024-07-11 02:46:54.157880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.066 qpair failed and we were unable to recover it. 00:41:04.066 [2024-07-11 02:46:54.157980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.066 [2024-07-11 02:46:54.158006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.158086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.158112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.158207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.158233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.158327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.158353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 
00:41:04.067 [2024-07-11 02:46:54.158438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.158464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.158553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.158583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.158672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.158698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.158803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.158832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.158925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.158956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 
00:41:04.067 [2024-07-11 02:46:54.159044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.159069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.159153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.159178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.159275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.159299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.159383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.159407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.159490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.159527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 
00:41:04.067 [2024-07-11 02:46:54.159649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.159674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.159766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.159791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.159898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.159940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.160032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.160059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.160151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.160177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 
00:41:04.067 [2024-07-11 02:46:54.160266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.160293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.160373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.160400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.160517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.160557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.160637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.160662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.160764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.160794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 
00:41:04.067 [2024-07-11 02:46:54.160897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.160923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.161020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.161045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.161140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.161170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.161276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.161302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.161392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.161418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 
00:41:04.067 [2024-07-11 02:46:54.161535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.161576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.161665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.161690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.161785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.161813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.161924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.161950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.067 [2024-07-11 02:46:54.162045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.162071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 
00:41:04.067 [2024-07-11 02:46:54.162180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.067 [2024-07-11 02:46:54.162226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.067 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.162316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.162343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.162435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.162465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.162560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.162586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.162665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.162691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 
00:41:04.068 [2024-07-11 02:46:54.162801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.162841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.162954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.162994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.163091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.163120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.163219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.163245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.163328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.163353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 
00:41:04.068 [2024-07-11 02:46:54.163444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.163469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.163561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.163587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.163670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.163694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.163799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.163826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.163943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.163984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 
00:41:04.068 [2024-07-11 02:46:54.164066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.164092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.164203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.164244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.164329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.164356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.164436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.164463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.164571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.164598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 
00:41:04.068 [2024-07-11 02:46:54.164695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.164721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.164830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.164872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.164965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.164992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.165081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.165109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.165200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.165227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 
00:41:04.068 [2024-07-11 02:46:54.165306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.165332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.165411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.165437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.165572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.165617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.165703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.165728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.165821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.165846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 
00:41:04.068 [2024-07-11 02:46:54.165939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.165964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.166051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.166075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.166175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.166202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.166304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.166328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.166413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.166438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 
00:41:04.068 [2024-07-11 02:46:54.166526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.166561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.166642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.166668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.166748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.166773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.166862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.166888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 00:41:04.068 [2024-07-11 02:46:54.166977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.068 [2024-07-11 02:46:54.167004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.068 qpair failed and we were unable to recover it. 
00:41:04.068 [2024-07-11 02:46:54.167123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.167148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.167249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.167276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.167360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.167385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.167497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.167552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.167651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.167677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.167777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.167802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.167889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.167913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.168038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.168078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.168182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.168222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.168313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.168339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.168425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.168451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.168567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.168595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.168704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.168732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.168854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.168895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.168982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.169011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.169114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.169141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.169271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.169296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.169379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.169403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.169500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.169546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.169636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.169661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.169764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.169791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.169888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.169915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.170051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.170112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.170238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.170277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.170396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.170433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.170547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.170575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.170665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.170690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.170774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.170800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.170908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.170933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.171032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.171072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.171182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.171221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.171307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.171335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.171421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.171446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.171541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.171567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.171660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.171686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.171792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.171818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.171908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.171934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.172023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.172048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.172131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.172158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.172239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.172264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.172363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.069 [2024-07-11 02:46:54.172389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.069 qpair failed and we were unable to recover it.
00:41:04.069 [2024-07-11 02:46:54.172491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.172544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.172649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.172677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.172785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.172811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.172916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.172942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.173029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.173054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.173153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.173179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.173271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.173296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.173383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.173407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.173494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.173525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.173633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.173659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.173776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.173818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.173918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.173945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.174055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.174082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.174203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.174242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.174333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.174360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.174476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.174536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.174636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.174675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.174763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.174788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.174876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.174901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.174987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.175011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.175100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.175127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.175216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.175241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.175328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.175354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.175439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.175464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.175571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.175597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.175706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.175744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.175847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.175874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.176001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.176040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.176162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.176189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.176294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.176323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.176419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.176445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.176562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.176590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.176709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.176734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.176852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.176879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.176992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.177018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.177106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.177132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.177222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.177247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.177330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.177356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.177456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.177482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.177599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.070 [2024-07-11 02:46:54.177625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.070 qpair failed and we were unable to recover it.
00:41:04.070 [2024-07-11 02:46:54.177744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.177781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.177876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.177903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.178003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.178029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.178130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.178156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.178239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.178265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.178350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.178375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.178475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.178502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.178624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.178650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.178754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.178780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.178893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.178919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.179019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.179044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.179131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.179156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.179260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.179287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.179388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.179412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.179525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.179569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.179650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.179675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.179762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.179787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.179874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.179899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.180000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.180025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.180106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.180131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.180220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.180250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.180331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.180356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.180441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.180470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.180570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.180596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.180689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.180715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.180798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.180827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.180910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.180935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.181024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.181049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.181141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.181167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.181262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.181287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.181378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.181404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.181495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.181528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.181614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.181641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.181741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.071 [2024-07-11 02:46:54.181766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.071 qpair failed and we were unable to recover it.
00:41:04.071 [2024-07-11 02:46:54.181862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.071 [2024-07-11 02:46:54.181894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.071 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.181985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.182012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.182102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.182127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.182211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.182236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.182322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.182347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 
00:41:04.072 [2024-07-11 02:46:54.182438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.182462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.182550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.182576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.182667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.182692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.182778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.182804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.182892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.182916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 
00:41:04.072 [2024-07-11 02:46:54.182999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.183024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.183122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.183146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.183244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.183269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.183356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.183382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.183467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.183493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 
00:41:04.072 [2024-07-11 02:46:54.183585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.183611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.183695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.183722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.183809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.183834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.183917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.183942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.184026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.184051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 
00:41:04.072 [2024-07-11 02:46:54.184148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.184178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.184272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.184300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.184389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.184414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.184530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.184564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.184656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.184684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 
00:41:04.072 [2024-07-11 02:46:54.184774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.184802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.184904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.184942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.185036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.185064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.185152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.185179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.185262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.185289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 
00:41:04.072 [2024-07-11 02:46:54.185375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.185400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.185492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.185525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.185610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.185635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.185715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.185740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.185835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.185860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 
00:41:04.072 [2024-07-11 02:46:54.185944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.185968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.186066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.186091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.186182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.186207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.186295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.186322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.186407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.186433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 
00:41:04.072 [2024-07-11 02:46:54.186519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.072 [2024-07-11 02:46:54.186545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.072 qpair failed and we were unable to recover it. 00:41:04.072 [2024-07-11 02:46:54.186632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.186657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.186742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.186768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.186855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.186881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.186971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.186999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 
00:41:04.073 [2024-07-11 02:46:54.187098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.187124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.187207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.187234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.187319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.187351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.187440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.187465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.187560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.187585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 
00:41:04.073 [2024-07-11 02:46:54.187675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.187701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.187794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.187819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.187904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.187929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.188012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.188036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.188115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.188139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 
00:41:04.073 [2024-07-11 02:46:54.188221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.188245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.188348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.188377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.188473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.188500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.188597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.188624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.188711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.188737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 
00:41:04.073 [2024-07-11 02:46:54.188836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.188862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.188948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.188973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.189062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.189089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.189176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.189201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.189291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.189317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 
00:41:04.073 [2024-07-11 02:46:54.189405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.189431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.189518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.189543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.189632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.189657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.189744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.189769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.189871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.189903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 
00:41:04.073 [2024-07-11 02:46:54.189998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.190034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.190143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.190171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.190258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.190284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.190376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.190404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 00:41:04.073 [2024-07-11 02:46:54.190489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.073 [2024-07-11 02:46:54.190528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.073 qpair failed and we were unable to recover it. 
00:41:04.073 [2024-07-11 02:46:54.190630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.073 [2024-07-11 02:46:54.190658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.073 qpair failed and we were unable to recover it.
00:41:04.073 [... the same three-record sequence (connect() failed, errno = 111; sock connection error; qpair failed and we were unable to recover it) repeats continuously from 02:46:54.190747 through 02:46:54.205329, cycling over tqpair addresses 0x2266180, 0x7f332c000b90, and 0x7f333c000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:41:04.076 [2024-07-11 02:46:54.205413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.076 [2024-07-11 02:46:54.205441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.076 qpair failed and we were unable to recover it. 00:41:04.076 [2024-07-11 02:46:54.205545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.076 [2024-07-11 02:46:54.205572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.076 qpair failed and we were unable to recover it. 00:41:04.076 [2024-07-11 02:46:54.205662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.076 [2024-07-11 02:46:54.205700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.076 qpair failed and we were unable to recover it. 00:41:04.076 [2024-07-11 02:46:54.205785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.076 [2024-07-11 02:46:54.205810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.076 qpair failed and we were unable to recover it. 00:41:04.076 [2024-07-11 02:46:54.205897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.076 [2024-07-11 02:46:54.205928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.076 qpair failed and we were unable to recover it. 
00:41:04.076 [2024-07-11 02:46:54.206007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.076 [2024-07-11 02:46:54.206036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.076 qpair failed and we were unable to recover it. 00:41:04.076 [2024-07-11 02:46:54.206116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.076 [2024-07-11 02:46:54.206153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.076 qpair failed and we were unable to recover it. 00:41:04.076 [2024-07-11 02:46:54.206261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.076 [2024-07-11 02:46:54.206319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.076 qpair failed and we were unable to recover it. 00:41:04.076 [2024-07-11 02:46:54.206414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.076 [2024-07-11 02:46:54.206441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.076 qpair failed and we were unable to recover it. 00:41:04.076 [2024-07-11 02:46:54.206533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.076 [2024-07-11 02:46:54.206563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.076 qpair failed and we were unable to recover it. 
00:41:04.076 [2024-07-11 02:46:54.206659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.076 [2024-07-11 02:46:54.206685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.076 qpair failed and we were unable to recover it. 00:41:04.076 [2024-07-11 02:46:54.206772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.206807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.206896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.206923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.207010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.207041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.207131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.207156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 
00:41:04.077 [2024-07-11 02:46:54.207245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.207276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.207376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.207403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.207491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.207525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.207630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.207657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.207750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.207778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 
00:41:04.077 [2024-07-11 02:46:54.207871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.207897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.207993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.208021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.208119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.208155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.208319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.208348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.208446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.208479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 
00:41:04.077 [2024-07-11 02:46:54.208596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.208628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.208763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.208808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.208892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.208924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.209016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.209042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.209132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.209167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 
00:41:04.077 [2024-07-11 02:46:54.209277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.209326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.209440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.209488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.209599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.209628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.209710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.209736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.209838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.209871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 
00:41:04.077 [2024-07-11 02:46:54.209972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.209999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.210118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.210176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.210281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.210326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.210411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.210437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.210535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.210563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 
00:41:04.077 [2024-07-11 02:46:54.210662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.210701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.210829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.210876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.210979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.211031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.211141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.211196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.211319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.211383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 
00:41:04.077 [2024-07-11 02:46:54.211542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.211623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.211737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.211765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.211850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.211875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.211966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.211992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.212097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.212153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 
00:41:04.077 [2024-07-11 02:46:54.212266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.212318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.212434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.212487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.212606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.077 [2024-07-11 02:46:54.212656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.077 qpair failed and we were unable to recover it. 00:41:04.077 [2024-07-11 02:46:54.212767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.212821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.212944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.212990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 
00:41:04.078 [2024-07-11 02:46:54.213106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.213157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.213260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.213290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.213379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.213408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.213506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.213538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.213627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.213667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 
00:41:04.078 [2024-07-11 02:46:54.213754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.213779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.213878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.213919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.214032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.214069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.214163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.214193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.214282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.214313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 
00:41:04.078 [2024-07-11 02:46:54.214411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.214438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.214524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.214554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.214649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.214685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.214779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.214804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.214898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.214925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 
00:41:04.078 [2024-07-11 02:46:54.215016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.215047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.215158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.215185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.215272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.215299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.215392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.215427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.215508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.215542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 
00:41:04.078 [2024-07-11 02:46:54.215628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.215666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.215789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.215846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.215981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.216027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.216124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.216152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.216261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.216290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 
00:41:04.078 [2024-07-11 02:46:54.216379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.216407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.216493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.216525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.216614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.216646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.216742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.216770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.216866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.216895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 
00:41:04.078 [2024-07-11 02:46:54.216988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.217016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.217108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.217139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.217230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.217261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.217350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.217376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.217465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.217498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 
00:41:04.078 [2024-07-11 02:46:54.217640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.217697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.217810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.217874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.217978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.218038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.218145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.218204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 00:41:04.078 [2024-07-11 02:46:54.218315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.078 [2024-07-11 02:46:54.218371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.078 qpair failed and we were unable to recover it. 
00:41:04.078 [2024-07-11 02:46:54.218483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.218538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.218653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.218701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.218818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.218864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.218972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.219007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.219135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.219185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 
00:41:04.079 [2024-07-11 02:46:54.219283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.219310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.219390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.219427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.219528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.219555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.219642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.219670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.219765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.219804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 
00:41:04.079 [2024-07-11 02:46:54.219913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.219959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.220047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.220075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.220164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.220196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.220285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.220313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.220406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.220435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 
00:41:04.079 [2024-07-11 02:46:54.220533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.220561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.220659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.220697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.220781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.220807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.220905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.220935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.221027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.221056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 
00:41:04.079 [2024-07-11 02:46:54.221145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.221180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.221279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.221307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.221394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.221427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.221527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.221554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.221637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.221665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 
00:41:04.079 [2024-07-11 02:46:54.221753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.221778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.221865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.221901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.221989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.222016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.222100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.222137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.222242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.222274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 
00:41:04.079 [2024-07-11 02:46:54.222368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.222398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.222487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.222528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.222632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.222661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.222748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.222779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.222877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.222904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 
00:41:04.079 [2024-07-11 02:46:54.223023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.223070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.223163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.223212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.223316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.223351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.223447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.223474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.223580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.223611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 
00:41:04.079 [2024-07-11 02:46:54.223704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.223732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.223831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.223859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.223946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.223973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.224069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.224099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.224193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.224221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 
00:41:04.079 [2024-07-11 02:46:54.224317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.224351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.079 [2024-07-11 02:46:54.224443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.079 [2024-07-11 02:46:54.224471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.079 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.224568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.224596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.224692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.224720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.224810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.224836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 
00:41:04.080 [2024-07-11 02:46:54.224930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.224957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.225063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.225113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.225219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.225266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.225361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.225389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.225553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.225582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 
00:41:04.080 [2024-07-11 02:46:54.225667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.225693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.225800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.225855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.225963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.226019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.226125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.226189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.226304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.226360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 
00:41:04.080 [2024-07-11 02:46:54.226476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.226555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.226679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.226728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.226817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.226844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.226936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.226965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.227059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.227111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 
00:41:04.080 [2024-07-11 02:46:54.227209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.227238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.227321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.227356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.227455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.227484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.227583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.227610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.227705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.227735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 
00:41:04.080 [2024-07-11 02:46:54.227837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.227871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.227984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.228021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.228140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.228192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.228310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.228360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.228490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.228549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 
00:41:04.080 [2024-07-11 02:46:54.228650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.228686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.228778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.228805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.228893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.228925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.229016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.229044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.229147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.229177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 
00:41:04.080 [2024-07-11 02:46:54.229268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.229297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.229380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.229414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.229505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.229545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.229655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.229704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.229792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.229819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 
00:41:04.080 [2024-07-11 02:46:54.229932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.229984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.080 [2024-07-11 02:46:54.230098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.080 [2024-07-11 02:46:54.230152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.080 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.230245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.230275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.230362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.230399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.230527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.230572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 
00:41:04.081 [2024-07-11 02:46:54.230667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.230694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.230783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.230818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.230929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.230987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.231100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.231146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.231263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.231320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 
00:41:04.081 [2024-07-11 02:46:54.231438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.231483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.231577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.231604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.231721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.231770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.231888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.231938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.232037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.232065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 
00:41:04.081 [2024-07-11 02:46:54.232170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.232220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.232319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.232355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.232462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.232490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.232596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.232625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.232737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.232800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 
00:41:04.081 [2024-07-11 02:46:54.232932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.232978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.233070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.233098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.233185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.233217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.233304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.233330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.233417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.233447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 
00:41:04.081 [2024-07-11 02:46:54.233546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.233574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.233662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.233698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.233795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.233822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.233908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.233944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.234031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.234057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 
00:41:04.081 [2024-07-11 02:46:54.234146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.234182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.234275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.234327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.234436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.234463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.234582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.234637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.234760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.234813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 
00:41:04.081 [2024-07-11 02:46:54.234929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.234979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.235083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.235133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.235246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.235286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.235394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.235423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.235515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.235542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 
00:41:04.081 [2024-07-11 02:46:54.235631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.235666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.235758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.235787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.235873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.235907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.236006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.236033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.236117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.236144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 
00:41:04.081 [2024-07-11 02:46:54.236231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.236257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.236346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.236376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.236475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.236507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.236631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.081 [2024-07-11 02:46:54.236678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.081 qpair failed and we were unable to recover it. 00:41:04.081 [2024-07-11 02:46:54.236790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.236838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 
00:41:04.082 [2024-07-11 02:46:54.236947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.236997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.237107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.237162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.237247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.237275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.237385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.237433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.237523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.237550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 
00:41:04.082 [2024-07-11 02:46:54.237655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.237702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.237805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.237860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.237966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.238025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.238127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.238168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.238273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.238301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 
00:41:04.082 [2024-07-11 02:46:54.238385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.238411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.238525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.238572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.238662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.238689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.238776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.238808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.238916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.238970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 
00:41:04.082 [2024-07-11 02:46:54.239081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.239141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.239255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.239282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.239370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.239406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.239596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.239625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.239718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.239745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 
00:41:04.082 [2024-07-11 02:46:54.239831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.239860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.239954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.239981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.240094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.240142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.240264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.240311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.240408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.240436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 
00:41:04.082 [2024-07-11 02:46:54.240563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.240612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.240729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.240779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.240889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.240942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.241028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.241054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.241143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.241180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 
00:41:04.082 [2024-07-11 02:46:54.241268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.241293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.241390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.241418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.241501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.241536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.241644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.241691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.241804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.241849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 
00:41:04.082 [2024-07-11 02:46:54.241950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.242010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.242092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.242120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.242214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.242250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.242352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.242398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.242488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.242531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 
00:41:04.082 [2024-07-11 02:46:54.242615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.242642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.242749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.242790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.242915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.242964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.243052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.243081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 00:41:04.082 [2024-07-11 02:46:54.243176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.082 [2024-07-11 02:46:54.243207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.082 qpair failed and we were unable to recover it. 
00:41:04.084 [2024-07-11 02:46:54.252048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.084 [2024-07-11 02:46:54.252107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.084 qpair failed and we were unable to recover it.
00:41:04.084 [2024-07-11 02:46:54.252203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.084 [2024-07-11 02:46:54.252230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.084 qpair failed and we were unable to recover it.
00:41:04.084 [2024-07-11 02:46:54.252341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.084 [2024-07-11 02:46:54.252371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.084 qpair failed and we were unable to recover it.
00:41:04.084 [2024-07-11 02:46:54.252466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.084 [2024-07-11 02:46:54.252492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.084 qpair failed and we were unable to recover it.
00:41:04.084 [2024-07-11 02:46:54.252590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.084 [2024-07-11 02:46:54.252624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.084 qpair failed and we were unable to recover it.
00:41:04.084 [2024-07-11 02:46:54.252729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.084 [2024-07-11 02:46:54.252766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.084 qpair failed and we were unable to recover it. 00:41:04.084 [2024-07-11 02:46:54.252909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.084 [2024-07-11 02:46:54.252950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.084 qpair failed and we were unable to recover it. 00:41:04.084 [2024-07-11 02:46:54.253066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.084 [2024-07-11 02:46:54.253118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.084 qpair failed and we were unable to recover it. 00:41:04.084 [2024-07-11 02:46:54.253208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.084 [2024-07-11 02:46:54.253234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.084 qpair failed and we were unable to recover it. 00:41:04.084 [2024-07-11 02:46:54.253329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.084 [2024-07-11 02:46:54.253388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.084 qpair failed and we were unable to recover it. 
00:41:04.084 [2024-07-11 02:46:54.253505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.084 [2024-07-11 02:46:54.253561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.084 qpair failed and we were unable to recover it. 00:41:04.084 [2024-07-11 02:46:54.253657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.084 [2024-07-11 02:46:54.253684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.084 qpair failed and we were unable to recover it. 00:41:04.084 [2024-07-11 02:46:54.253796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.084 [2024-07-11 02:46:54.253848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.084 qpair failed and we were unable to recover it. 00:41:04.084 [2024-07-11 02:46:54.253965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.084 [2024-07-11 02:46:54.254013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.084 qpair failed and we were unable to recover it. 00:41:04.084 [2024-07-11 02:46:54.254121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.084 [2024-07-11 02:46:54.254178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.084 qpair failed and we were unable to recover it. 
00:41:04.084 [2024-07-11 02:46:54.254271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.084 [2024-07-11 02:46:54.254304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.084 qpair failed and we were unable to recover it. 00:41:04.084 [2024-07-11 02:46:54.254402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.084 [2024-07-11 02:46:54.254433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.084 qpair failed and we were unable to recover it. 00:41:04.084 [2024-07-11 02:46:54.254527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.084 [2024-07-11 02:46:54.254556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.084 qpair failed and we were unable to recover it. 00:41:04.084 [2024-07-11 02:46:54.254667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.084 [2024-07-11 02:46:54.254715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.084 qpair failed and we were unable to recover it. 00:41:04.084 [2024-07-11 02:46:54.254839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.084 [2024-07-11 02:46:54.254882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.084 qpair failed and we were unable to recover it. 
00:41:04.084 [2024-07-11 02:46:54.254994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.084 [2024-07-11 02:46:54.255046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.084 qpair failed and we were unable to recover it. 00:41:04.084 [2024-07-11 02:46:54.255150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.084 [2024-07-11 02:46:54.255201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.084 qpair failed and we were unable to recover it. 00:41:04.084 [2024-07-11 02:46:54.255288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.084 [2024-07-11 02:46:54.255314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.084 qpair failed and we were unable to recover it. 00:41:04.084 [2024-07-11 02:46:54.255407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.084 [2024-07-11 02:46:54.255440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.085 qpair failed and we were unable to recover it. 00:41:04.085 [2024-07-11 02:46:54.255524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.085 [2024-07-11 02:46:54.255553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.085 qpair failed and we were unable to recover it. 
00:41:04.085 [2024-07-11 02:46:54.255660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.085 [2024-07-11 02:46:54.255709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.085 qpair failed and we were unable to recover it. 00:41:04.085 [2024-07-11 02:46:54.255792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.085 [2024-07-11 02:46:54.255817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.085 qpair failed and we were unable to recover it. 00:41:04.085 [2024-07-11 02:46:54.255907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.085 [2024-07-11 02:46:54.255942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.085 qpair failed and we were unable to recover it. 00:41:04.085 [2024-07-11 02:46:54.256034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.085 [2024-07-11 02:46:54.256062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.085 qpair failed and we were unable to recover it. 00:41:04.085 [2024-07-11 02:46:54.256152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.085 [2024-07-11 02:46:54.256188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.085 qpair failed and we were unable to recover it. 
00:41:04.085 [2024-07-11 02:46:54.256282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.256311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.256405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.256434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.256532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.256559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.256650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.256686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.256774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.256803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.256896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.256931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.257013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.257039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.257127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.257161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.257255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.257281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.257363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.257390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.257473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.257506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.257618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.257656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.257793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.257840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.257926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.257952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.258048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.258075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.258167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.258194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.258290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.258317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.258399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.258426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.258514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.258554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.258675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.258729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.258813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.258838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.258970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.259016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.259103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.259128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.259214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.259251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.259336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.259361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.259448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.259488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.259595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.259623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.259716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.259747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.259836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.259862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.259947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.259981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.260073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.260102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.260201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.260234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.260321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.260349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.260454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.260501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.260609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.260637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.260740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.260779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.260888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.260915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.260998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.261032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.261115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.261141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.085 [2024-07-11 02:46:54.261254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.085 [2024-07-11 02:46:54.261306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.085 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.261431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.261482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.261584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.261611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.261713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.261770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.261876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.261913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.262043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.262091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.262199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.262255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.262364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.262402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.262505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.262543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.262641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.262682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.262817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.262871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.262968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.263009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.263106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.263132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.263237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.263291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.263389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.263415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.263545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.263592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.263716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.263759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.263875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.263930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.264030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.264066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.264162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.264190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.264322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.264373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.264460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.264488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.264608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.264654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.264750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.264776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.264887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.264932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.265039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.265085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.265190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.265254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.265365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.265420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.265526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.265583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.265670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.265696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.265798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.265847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.265932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.265958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.266066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.266114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.266206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.266233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.266317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.266352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.266440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.266465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.266580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.266648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.266770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.266826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.266913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.266940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.267044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.267073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.267170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.267221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.267321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.267350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.267441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.267475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.267635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.267664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.267772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.267829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.267934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.267985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.268085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.086 [2024-07-11 02:46:54.268132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.086 qpair failed and we were unable to recover it.
00:41:04.086 [2024-07-11 02:46:54.268221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.087 [2024-07-11 02:46:54.268246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.087 qpair failed and we were unable to recover it.
00:41:04.087 [2024-07-11 02:46:54.268348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.087 [2024-07-11 02:46:54.268404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.087 qpair failed and we were unable to recover it.
00:41:04.087 [2024-07-11 02:46:54.268501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.087 [2024-07-11 02:46:54.268555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.087 qpair failed and we were unable to recover it.
00:41:04.087 [2024-07-11 02:46:54.268666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.087 [2024-07-11 02:46:54.268713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.087 qpair failed and we were unable to recover it.
00:41:04.087 [2024-07-11 02:46:54.268812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.087 [2024-07-11 02:46:54.268848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.087 qpair failed and we were unable to recover it.
00:41:04.087 [2024-07-11 02:46:54.268947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.087 [2024-07-11 02:46:54.268983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.087 qpair failed and we were unable to recover it.
00:41:04.087 [2024-07-11 02:46:54.269089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.087 [2024-07-11 02:46:54.269138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.087 qpair failed and we were unable to recover it.
00:41:04.087 [2024-07-11 02:46:54.269258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.087 [2024-07-11 02:46:54.269306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.087 qpair failed and we were unable to recover it.
00:41:04.087 [2024-07-11 02:46:54.269407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.087 [2024-07-11 02:46:54.269434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.087 qpair failed and we were unable to recover it.
00:41:04.087 [2024-07-11 02:46:54.269522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.269558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 00:41:04.087 [2024-07-11 02:46:54.269671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.269727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 00:41:04.087 [2024-07-11 02:46:54.269840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.269876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 00:41:04.087 [2024-07-11 02:46:54.269990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.270039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 00:41:04.087 [2024-07-11 02:46:54.270136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.270176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 
00:41:04.087 [2024-07-11 02:46:54.270303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.270350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 00:41:04.087 [2024-07-11 02:46:54.270445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.270472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 00:41:04.087 [2024-07-11 02:46:54.270565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.270592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 00:41:04.087 [2024-07-11 02:46:54.270693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.270730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 00:41:04.087 [2024-07-11 02:46:54.270842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.270883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 
00:41:04.087 [2024-07-11 02:46:54.271012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.271064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 00:41:04.087 [2024-07-11 02:46:54.271151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.271182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 00:41:04.087 [2024-07-11 02:46:54.271270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.271302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 00:41:04.087 [2024-07-11 02:46:54.271391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.271415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 00:41:04.087 [2024-07-11 02:46:54.271505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.271545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 
00:41:04.087 [2024-07-11 02:46:54.271664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.271716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 00:41:04.087 [2024-07-11 02:46:54.271815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.271864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 00:41:04.087 [2024-07-11 02:46:54.271952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.271981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 00:41:04.087 [2024-07-11 02:46:54.272093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.272139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 00:41:04.087 [2024-07-11 02:46:54.272230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.272256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 
00:41:04.087 [2024-07-11 02:46:54.272337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.272371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 00:41:04.087 [2024-07-11 02:46:54.272489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.272549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 00:41:04.087 [2024-07-11 02:46:54.272635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.272660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 00:41:04.087 [2024-07-11 02:46:54.272745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.272772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 00:41:04.087 [2024-07-11 02:46:54.272879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.272927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 
00:41:04.087 [2024-07-11 02:46:54.273026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.273053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 00:41:04.087 [2024-07-11 02:46:54.273151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.273185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 00:41:04.087 [2024-07-11 02:46:54.273273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.273300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 00:41:04.087 [2024-07-11 02:46:54.273391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.273430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 00:41:04.087 [2024-07-11 02:46:54.273525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.273551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 
00:41:04.087 [2024-07-11 02:46:54.273657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.087 [2024-07-11 02:46:54.273695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.087 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.273809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.273845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.273940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.273968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.274059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.274090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.274185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.274214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 
00:41:04.088 [2024-07-11 02:46:54.274311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.274341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.274433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.274459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.274547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.274578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.274680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.274738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.274841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.274888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 
00:41:04.088 [2024-07-11 02:46:54.274998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.275045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.275148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.275200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.275289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.275315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.275403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.275436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.275526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.275552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 
00:41:04.088 [2024-07-11 02:46:54.275682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.275731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.275826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.275856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.275968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.276014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.276124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.276169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.276275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.276322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 
00:41:04.088 [2024-07-11 02:46:54.276443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.276489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.276643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.276690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.276811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.276858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.276964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.277018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.277122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.277160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 
00:41:04.088 [2024-07-11 02:46:54.277286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.277332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.277446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.277493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.277608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.277656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.277778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.277821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.277936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.277984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 
00:41:04.088 [2024-07-11 02:46:54.278093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.278147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.278253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.278302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.278420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.278468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.278590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.278639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.278755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.278808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 
00:41:04.088 [2024-07-11 02:46:54.278925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.278973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.279086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.279144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.279249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.279307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.279420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.279474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.279580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.279606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 
00:41:04.088 [2024-07-11 02:46:54.279692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.279727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.279814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.279842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.279944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.279996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.280083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.280111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.280201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.280236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 
00:41:04.088 [2024-07-11 02:46:54.280341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.280393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.088 [2024-07-11 02:46:54.280476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.088 [2024-07-11 02:46:54.280502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.088 qpair failed and we were unable to recover it. 00:41:04.089 [2024-07-11 02:46:54.280609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.089 [2024-07-11 02:46:54.280662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.089 qpair failed and we were unable to recover it. 00:41:04.089 [2024-07-11 02:46:54.280772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.089 [2024-07-11 02:46:54.280829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.089 qpair failed and we were unable to recover it. 00:41:04.089 [2024-07-11 02:46:54.280926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.089 [2024-07-11 02:46:54.280953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.089 qpair failed and we were unable to recover it. 
00:41:04.089 [2024-07-11 02:46:54.281050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.089 [2024-07-11 02:46:54.281105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.089 qpair failed and we were unable to recover it. 00:41:04.089 [2024-07-11 02:46:54.281216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.089 [2024-07-11 02:46:54.281267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.089 qpair failed and we were unable to recover it. 00:41:04.089 [2024-07-11 02:46:54.281374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.089 [2024-07-11 02:46:54.281401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.089 qpair failed and we were unable to recover it. 00:41:04.089 [2024-07-11 02:46:54.281504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.089 [2024-07-11 02:46:54.281567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.089 qpair failed and we were unable to recover it. 00:41:04.089 [2024-07-11 02:46:54.281683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.089 [2024-07-11 02:46:54.281735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.089 qpair failed and we were unable to recover it. 
00:41:04.089 [2024-07-11 02:46:54.281850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.281896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.282004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.282060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.282148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.282177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.282265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.282292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.282377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.282404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.282489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.282536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.282647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.282695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.282813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.282860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.282963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.283011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.283116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.283164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.283278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.283325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.283413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.283439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.283525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.283559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.283650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.283677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.283783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.283835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.283938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.283976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.284096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.284147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.284235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.284261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.284349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.284381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.284482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.284514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.284626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.284677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.284784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.284840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.284920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.284945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.285063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.285112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.285221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.285273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.285358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.285384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.285475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.285505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.285613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.285641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.285735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.285762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.285875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.285925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.286031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.286079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.286169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.286203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.286295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.286320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.286403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.286434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.286556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.286622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.286718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.286748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.286843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.286870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.286961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.286997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.089 [2024-07-11 02:46:54.287091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.089 [2024-07-11 02:46:54.287117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.089 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.287212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.287242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.287333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.287360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.287450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.287482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.287592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.287620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.287710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.287743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.287842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.287868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.287974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.288022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.288133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.288180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.288291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.288344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.288431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.288464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.288571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.288600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.288688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.288725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.288870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.288911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.289000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.289028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.289117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.289147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.289264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.289311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.289409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.289439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.289529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.289564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.289666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.289694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.289792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.289820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.289908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.289943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.290036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.290063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.290154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.290184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.290283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.290310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.290400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.290428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.290521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.290555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.290651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.290677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.290762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.290795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.290911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.290960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.291081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.291139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.291225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.291252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.291340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.291367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.291475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.291530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.291630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.291661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.291771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.291821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.291933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.291985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.292093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.292139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.292242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.292297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.292405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.292471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.292580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.292630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.292754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.292810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.292930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.292965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.293136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.293164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.293249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.293276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.293394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.293441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.293541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.293570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.090 [2024-07-11 02:46:54.293667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.090 [2024-07-11 02:46:54.293697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.090 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.293798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.293827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.293922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.293959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.294059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.294087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.294168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.294196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.294282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.294309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.294405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.294433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.294521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.294549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.294708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.294736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.294825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.294853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.294934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.294969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.295062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.295089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.295175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.295203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.295288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.295314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.295429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.295457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.295555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.295583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.295698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.295748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.295843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.295872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.295982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.296033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.296135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.296172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.296304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.296350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.296440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.296467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.296583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.296635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.296742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.296792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.296894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.296940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.297064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.297111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.297215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.297272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.297382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.297438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.297535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.297561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.297670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.091 [2024-07-11 02:46:54.297722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.091 qpair failed and we were unable to recover it.
00:41:04.091 [2024-07-11 02:46:54.297844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.091 [2024-07-11 02:46:54.297890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.091 qpair failed and we were unable to recover it. 00:41:04.091 [2024-07-11 02:46:54.297993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.091 [2024-07-11 02:46:54.298033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.091 qpair failed and we were unable to recover it. 00:41:04.091 [2024-07-11 02:46:54.298172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.091 [2024-07-11 02:46:54.298215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.091 qpair failed and we were unable to recover it. 00:41:04.091 [2024-07-11 02:46:54.298324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.091 [2024-07-11 02:46:54.298371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.091 qpair failed and we were unable to recover it. 00:41:04.091 [2024-07-11 02:46:54.298472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.091 [2024-07-11 02:46:54.298528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.091 qpair failed and we were unable to recover it. 
00:41:04.091 [2024-07-11 02:46:54.298624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.091 [2024-07-11 02:46:54.298653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.091 qpair failed and we were unable to recover it. 00:41:04.091 [2024-07-11 02:46:54.298764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.091 [2024-07-11 02:46:54.298815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.091 qpair failed and we were unable to recover it. 00:41:04.091 [2024-07-11 02:46:54.298959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.091 [2024-07-11 02:46:54.299007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.091 qpair failed and we were unable to recover it. 00:41:04.091 [2024-07-11 02:46:54.299116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.091 [2024-07-11 02:46:54.299163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.091 qpair failed and we were unable to recover it. 00:41:04.091 [2024-07-11 02:46:54.299272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.091 [2024-07-11 02:46:54.299321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.091 qpair failed and we were unable to recover it. 
00:41:04.091 [2024-07-11 02:46:54.299418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.091 [2024-07-11 02:46:54.299457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.091 qpair failed and we were unable to recover it. 00:41:04.092 [2024-07-11 02:46:54.299570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.092 [2024-07-11 02:46:54.299599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.092 qpair failed and we were unable to recover it. 00:41:04.092 [2024-07-11 02:46:54.299714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.092 [2024-07-11 02:46:54.299763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.092 qpair failed and we were unable to recover it. 00:41:04.092 [2024-07-11 02:46:54.299876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.092 [2024-07-11 02:46:54.299932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.092 qpair failed and we were unable to recover it. 00:41:04.092 [2024-07-11 02:46:54.300037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.092 [2024-07-11 02:46:54.300074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.092 qpair failed and we were unable to recover it. 
00:41:04.092 [2024-07-11 02:46:54.300185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.092 [2024-07-11 02:46:54.300218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.092 qpair failed and we were unable to recover it. 00:41:04.092 [2024-07-11 02:46:54.300331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.092 [2024-07-11 02:46:54.300384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.092 qpair failed and we were unable to recover it. 00:41:04.092 [2024-07-11 02:46:54.300480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.092 [2024-07-11 02:46:54.300508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.092 qpair failed and we were unable to recover it. 00:41:04.092 [2024-07-11 02:46:54.300604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.092 [2024-07-11 02:46:54.300640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.092 qpair failed and we were unable to recover it. 00:41:04.092 [2024-07-11 02:46:54.300727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.092 [2024-07-11 02:46:54.300752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.092 qpair failed and we were unable to recover it. 
00:41:04.092 [2024-07-11 02:46:54.300844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.092 [2024-07-11 02:46:54.300874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.092 qpair failed and we were unable to recover it. 00:41:04.092 [2024-07-11 02:46:54.300959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.092 [2024-07-11 02:46:54.300986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.092 qpair failed and we were unable to recover it. 00:41:04.092 [2024-07-11 02:46:54.301076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.092 [2024-07-11 02:46:54.301107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.092 qpair failed and we were unable to recover it. 00:41:04.092 [2024-07-11 02:46:54.301206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.092 [2024-07-11 02:46:54.301237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.092 qpair failed and we were unable to recover it. 00:41:04.092 [2024-07-11 02:46:54.301331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.092 [2024-07-11 02:46:54.301369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.092 qpair failed and we were unable to recover it. 
00:41:04.092 [2024-07-11 02:46:54.301459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.092 [2024-07-11 02:46:54.301486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.092 qpair failed and we were unable to recover it. 00:41:04.092 [2024-07-11 02:46:54.301601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.092 [2024-07-11 02:46:54.301656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.092 qpair failed and we were unable to recover it. 00:41:04.092 [2024-07-11 02:46:54.301753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.092 [2024-07-11 02:46:54.301782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.092 qpair failed and we were unable to recover it. 00:41:04.092 [2024-07-11 02:46:54.301872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.092 [2024-07-11 02:46:54.301901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.092 qpair failed and we were unable to recover it. 00:41:04.092 [2024-07-11 02:46:54.301995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.092 [2024-07-11 02:46:54.302023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.092 qpair failed and we were unable to recover it. 
00:41:04.092 [2024-07-11 02:46:54.302102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.092 [2024-07-11 02:46:54.302129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.092 qpair failed and we were unable to recover it. 00:41:04.092 [2024-07-11 02:46:54.302217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.092 [2024-07-11 02:46:54.302242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.092 qpair failed and we were unable to recover it. 00:41:04.092 [2024-07-11 02:46:54.302348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.092 [2024-07-11 02:46:54.302397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.092 qpair failed and we were unable to recover it. 00:41:04.092 [2024-07-11 02:46:54.302502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.302552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 00:41:04.093 [2024-07-11 02:46:54.302665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.302713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 
00:41:04.093 [2024-07-11 02:46:54.302820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.302874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 00:41:04.093 [2024-07-11 02:46:54.302986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.303040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 00:41:04.093 [2024-07-11 02:46:54.303150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.303203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 00:41:04.093 [2024-07-11 02:46:54.303303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.303332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 00:41:04.093 [2024-07-11 02:46:54.303423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.303451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 
00:41:04.093 [2024-07-11 02:46:54.303570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.303625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 00:41:04.093 [2024-07-11 02:46:54.303742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.303794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 00:41:04.093 [2024-07-11 02:46:54.303896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.303951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 00:41:04.093 [2024-07-11 02:46:54.304035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.304061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 00:41:04.093 [2024-07-11 02:46:54.304153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.304184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 
00:41:04.093 [2024-07-11 02:46:54.304276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.304302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 00:41:04.093 [2024-07-11 02:46:54.304382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.304411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 00:41:04.093 [2024-07-11 02:46:54.304505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.304536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 00:41:04.093 [2024-07-11 02:46:54.304673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.304713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 00:41:04.093 [2024-07-11 02:46:54.304823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.304870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 
00:41:04.093 [2024-07-11 02:46:54.304971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.305005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 00:41:04.093 [2024-07-11 02:46:54.305109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.305158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 00:41:04.093 [2024-07-11 02:46:54.305284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.305334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 00:41:04.093 [2024-07-11 02:46:54.305428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.305462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 00:41:04.093 [2024-07-11 02:46:54.305582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.305632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 
00:41:04.093 [2024-07-11 02:46:54.305741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.305789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 00:41:04.093 [2024-07-11 02:46:54.305889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.305925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 00:41:04.093 [2024-07-11 02:46:54.306025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.306052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 00:41:04.093 [2024-07-11 02:46:54.306164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.306202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 00:41:04.093 [2024-07-11 02:46:54.306329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.306376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 
00:41:04.093 [2024-07-11 02:46:54.306463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.093 [2024-07-11 02:46:54.306490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.093 qpair failed and we were unable to recover it. 00:41:04.093 [2024-07-11 02:46:54.306592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.094 [2024-07-11 02:46:54.306629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.094 qpair failed and we were unable to recover it. 00:41:04.094 [2024-07-11 02:46:54.306727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.094 [2024-07-11 02:46:54.306767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.094 qpair failed and we were unable to recover it. 00:41:04.094 [2024-07-11 02:46:54.306864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.094 [2024-07-11 02:46:54.306895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.094 qpair failed and we were unable to recover it. 00:41:04.094 [2024-07-11 02:46:54.306992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.094 [2024-07-11 02:46:54.307022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.094 qpair failed and we were unable to recover it. 
00:41:04.094 [2024-07-11 02:46:54.307112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.094 [2024-07-11 02:46:54.307145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.094 qpair failed and we were unable to recover it. 00:41:04.094 [2024-07-11 02:46:54.307245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.094 [2024-07-11 02:46:54.307273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.094 qpair failed and we were unable to recover it. 00:41:04.094 [2024-07-11 02:46:54.307365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.094 [2024-07-11 02:46:54.307395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.094 qpair failed and we were unable to recover it. 00:41:04.094 [2024-07-11 02:46:54.307498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.094 [2024-07-11 02:46:54.307531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.094 qpair failed and we were unable to recover it. 00:41:04.094 [2024-07-11 02:46:54.307621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.094 [2024-07-11 02:46:54.307649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.094 qpair failed and we were unable to recover it. 
00:41:04.094 [2024-07-11 02:46:54.307739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.094 [2024-07-11 02:46:54.307768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.094 qpair failed and we were unable to recover it. 00:41:04.094 [2024-07-11 02:46:54.307864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.094 [2024-07-11 02:46:54.307892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.094 qpair failed and we were unable to recover it. 00:41:04.094 [2024-07-11 02:46:54.308000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.094 [2024-07-11 02:46:54.308052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.094 qpair failed and we were unable to recover it. 00:41:04.094 [2024-07-11 02:46:54.308136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.094 [2024-07-11 02:46:54.308163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.094 qpair failed and we were unable to recover it. 00:41:04.094 [2024-07-11 02:46:54.308251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.094 [2024-07-11 02:46:54.308282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.094 qpair failed and we were unable to recover it. 
00:41:04.094 [2024-07-11 02:46:54.308400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.094 [2024-07-11 02:46:54.308450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.094 qpair failed and we were unable to recover it. 00:41:04.094 [2024-07-11 02:46:54.308540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.094 [2024-07-11 02:46:54.308567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.094 qpair failed and we were unable to recover it. 00:41:04.094 [2024-07-11 02:46:54.308649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.094 [2024-07-11 02:46:54.308683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.094 qpair failed and we were unable to recover it. 00:41:04.094 [2024-07-11 02:46:54.308773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.094 [2024-07-11 02:46:54.308801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.094 qpair failed and we were unable to recover it. 00:41:04.094 [2024-07-11 02:46:54.308891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.094 [2024-07-11 02:46:54.308925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.094 qpair failed and we were unable to recover it. 
00:41:04.094 [2024-07-11 02:46:54.309011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.094 [2024-07-11 02:46:54.309043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.094 qpair failed and we were unable to recover it. 00:41:04.094 [2024-07-11 02:46:54.309131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.094 [2024-07-11 02:46:54.309165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.094 qpair failed and we were unable to recover it. 00:41:04.094 [2024-07-11 02:46:54.309287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.094 [2024-07-11 02:46:54.309335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.094 qpair failed and we were unable to recover it. 00:41:04.094 [2024-07-11 02:46:54.309430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.094 [2024-07-11 02:46:54.309456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.094 qpair failed and we were unable to recover it. 00:41:04.094 [2024-07-11 02:46:54.309548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.094 [2024-07-11 02:46:54.309585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.094 qpair failed and we were unable to recover it. 
00:41:04.094 [2024-07-11 02:46:54.309681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.094 [2024-07-11 02:46:54.309708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.094 qpair failed and we were unable to recover it.
00:41:04.094 [2024-07-11 02:46:54.309817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.094 [2024-07-11 02:46:54.309863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.094 qpair failed and we were unable to recover it.
00:41:04.094 [2024-07-11 02:46:54.309964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.094 [2024-07-11 02:46:54.310020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.094 qpair failed and we were unable to recover it.
00:41:04.094 [2024-07-11 02:46:54.310131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.094 [2024-07-11 02:46:54.310176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.094 qpair failed and we were unable to recover it.
00:41:04.094 [2024-07-11 02:46:54.310261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.094 [2024-07-11 02:46:54.310285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.094 qpair failed and we were unable to recover it.
00:41:04.094 [2024-07-11 02:46:54.310373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.094 [2024-07-11 02:46:54.310401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.094 qpair failed and we were unable to recover it.
00:41:04.094 [2024-07-11 02:46:54.310602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.094 [2024-07-11 02:46:54.310631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.094 qpair failed and we were unable to recover it.
00:41:04.094 [2024-07-11 02:46:54.310748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.310794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.310896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.310951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.311066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.311113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.311232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.311277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.311367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.311394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.311501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.311558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.311679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.311726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.311814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.311839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.311927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.311962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.312071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.312108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.312221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.312259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.312375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.312409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.312505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.312542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.312710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.312738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.312864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.312913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.313006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.313040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.313165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.313215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.313308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.313338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.313449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.313497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.313629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.313680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.313786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.313825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.313956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.314011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.314108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.314144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.314254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.314285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.314399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.314443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.314537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.314563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.314651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.314684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.314785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.314812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.314904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.314936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.315034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.315062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.315151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.315180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.095 [2024-07-11 02:46:54.315269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.095 [2024-07-11 02:46:54.315296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.095 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.315380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.315415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.315504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.315535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.315647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.315689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.315807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.315853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.315955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.315991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.316110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.316160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.316264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.316319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.316471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.316501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.316629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.316678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.316783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.316836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.316952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.317003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.317112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.317166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.317251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.317276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.317356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.317389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.317505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.317565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.317685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.317733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.317822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.317848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.317953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.317998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.318079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.318104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.318213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.318261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.318383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.318436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.318536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.318563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.318672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.318719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.318849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.318892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.318989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.319026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.319138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.319195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.319279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.319305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.319386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.319420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.319515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.319542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.319628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.096 [2024-07-11 02:46:54.319653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.096 qpair failed and we were unable to recover it.
00:41:04.096 [2024-07-11 02:46:54.319735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.097 [2024-07-11 02:46:54.319770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.097 qpair failed and we were unable to recover it.
00:41:04.097 [2024-07-11 02:46:54.319867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.097 [2024-07-11 02:46:54.319895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.097 qpair failed and we were unable to recover it.
00:41:04.097 [2024-07-11 02:46:54.320021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.097 [2024-07-11 02:46:54.320075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.097 qpair failed and we were unable to recover it.
00:41:04.097 [2024-07-11 02:46:54.320163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.097 [2024-07-11 02:46:54.320190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.097 qpair failed and we were unable to recover it.
00:41:04.097 [2024-07-11 02:46:54.320316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.097 [2024-07-11 02:46:54.320370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.097 qpair failed and we were unable to recover it.
00:41:04.097 [2024-07-11 02:46:54.320459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.097 [2024-07-11 02:46:54.320486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.097 qpair failed and we were unable to recover it.
00:41:04.097 [2024-07-11 02:46:54.320641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.097 [2024-07-11 02:46:54.320673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.097 qpair failed and we were unable to recover it.
00:41:04.097 [2024-07-11 02:46:54.320801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.097 [2024-07-11 02:46:54.320863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.097 qpair failed and we were unable to recover it.
00:41:04.097 [2024-07-11 02:46:54.320962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.097 [2024-07-11 02:46:54.320997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.097 qpair failed and we were unable to recover it.
00:41:04.097 [2024-07-11 02:46:54.321120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.097 [2024-07-11 02:46:54.321183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.097 qpair failed and we were unable to recover it.
00:41:04.097 [2024-07-11 02:46:54.321273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.097 [2024-07-11 02:46:54.321299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.097 qpair failed and we were unable to recover it.
00:41:04.097 [2024-07-11 02:46:54.321388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.097 [2024-07-11 02:46:54.321413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.097 qpair failed and we were unable to recover it.
00:41:04.097 [2024-07-11 02:46:54.321570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.097 [2024-07-11 02:46:54.321599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.097 qpair failed and we were unable to recover it.
00:41:04.097 [2024-07-11 02:46:54.321709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.097 [2024-07-11 02:46:54.321770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.097 qpair failed and we were unable to recover it.
00:41:04.097 [2024-07-11 02:46:54.321942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.097 [2024-07-11 02:46:54.321995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.097 qpair failed and we were unable to recover it.
00:41:04.097 [2024-07-11 02:46:54.322096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.097 [2024-07-11 02:46:54.322158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.097 qpair failed and we were unable to recover it.
00:41:04.097 [2024-07-11 02:46:54.322248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.097 [2024-07-11 02:46:54.322278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.097 qpair failed and we were unable to recover it.
00:41:04.097 [2024-07-11 02:46:54.322368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.097 [2024-07-11 02:46:54.322395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.097 qpair failed and we were unable to recover it.
00:41:04.097 [2024-07-11 02:46:54.322497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.097 [2024-07-11 02:46:54.322534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.097 qpair failed and we were unable to recover it.
00:41:04.097 [2024-07-11 02:46:54.322627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.097 [2024-07-11 02:46:54.322654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.097 qpair failed and we were unable to recover it.
00:41:04.097 [2024-07-11 02:46:54.322735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.097 [2024-07-11 02:46:54.322762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.097 qpair failed and we were unable to recover it.
00:41:04.097 [2024-07-11 02:46:54.322851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.097 [2024-07-11 02:46:54.322879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.097 qpair failed and we were unable to recover it.
00:41:04.097 [2024-07-11 02:46:54.322969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.097 [2024-07-11 02:46:54.322996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.097 qpair failed and we were unable to recover it.
00:41:04.097 [2024-07-11 02:46:54.323082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.097 [2024-07-11 02:46:54.323107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.098 qpair failed and we were unable to recover it.
00:41:04.098 [2024-07-11 02:46:54.323197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.098 [2024-07-11 02:46:54.323227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.098 qpair failed and we were unable to recover it.
00:41:04.098 [2024-07-11 02:46:54.323340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.098 [2024-07-11 02:46:54.323375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.098 qpair failed and we were unable to recover it.
00:41:04.098 [2024-07-11 02:46:54.323484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.323527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 00:41:04.098 [2024-07-11 02:46:54.323625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.323656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 00:41:04.098 [2024-07-11 02:46:54.323765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.323824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 00:41:04.098 [2024-07-11 02:46:54.323926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.323991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 00:41:04.098 [2024-07-11 02:46:54.324082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.324111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 
00:41:04.098 [2024-07-11 02:46:54.324198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.324228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 00:41:04.098 [2024-07-11 02:46:54.324317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.324352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 00:41:04.098 [2024-07-11 02:46:54.324440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.324470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 00:41:04.098 [2024-07-11 02:46:54.324583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.324614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 00:41:04.098 [2024-07-11 02:46:54.324700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.324729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 
00:41:04.098 [2024-07-11 02:46:54.324815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.324849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 00:41:04.098 [2024-07-11 02:46:54.324943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.324971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 00:41:04.098 [2024-07-11 02:46:54.325061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.325095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 00:41:04.098 [2024-07-11 02:46:54.325222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.325258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 00:41:04.098 [2024-07-11 02:46:54.325416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.325470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 
00:41:04.098 [2024-07-11 02:46:54.325567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.325597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 00:41:04.098 [2024-07-11 02:46:54.325683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.325709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 00:41:04.098 [2024-07-11 02:46:54.325795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.325822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 00:41:04.098 [2024-07-11 02:46:54.325911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.325939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 00:41:04.098 [2024-07-11 02:46:54.326076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.326138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 
00:41:04.098 [2024-07-11 02:46:54.326255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.326286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 00:41:04.098 [2024-07-11 02:46:54.326410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.326444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 00:41:04.098 [2024-07-11 02:46:54.326538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.326565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 00:41:04.098 [2024-07-11 02:46:54.326693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.326721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 00:41:04.098 [2024-07-11 02:46:54.326822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.326889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 
00:41:04.098 [2024-07-11 02:46:54.326976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.327003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 00:41:04.098 [2024-07-11 02:46:54.327126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.098 [2024-07-11 02:46:54.327185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.098 qpair failed and we were unable to recover it. 00:41:04.099 [2024-07-11 02:46:54.327276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.327305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 00:41:04.099 [2024-07-11 02:46:54.327420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.327470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 00:41:04.099 [2024-07-11 02:46:54.327598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.327635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 
00:41:04.099 [2024-07-11 02:46:54.327724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.327752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 00:41:04.099 [2024-07-11 02:46:54.327837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.327870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 00:41:04.099 [2024-07-11 02:46:54.328074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.328104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 00:41:04.099 [2024-07-11 02:46:54.328240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.328284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 00:41:04.099 [2024-07-11 02:46:54.328456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.328522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 
00:41:04.099 [2024-07-11 02:46:54.328621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.328648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 00:41:04.099 [2024-07-11 02:46:54.328770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.328824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 00:41:04.099 [2024-07-11 02:46:54.328912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.328939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 00:41:04.099 [2024-07-11 02:46:54.329048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.329106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 00:41:04.099 [2024-07-11 02:46:54.329225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.329280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 
00:41:04.099 [2024-07-11 02:46:54.329450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.329500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 00:41:04.099 [2024-07-11 02:46:54.329655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.329685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 00:41:04.099 [2024-07-11 02:46:54.329793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.329852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 00:41:04.099 [2024-07-11 02:46:54.329978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.330032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 00:41:04.099 [2024-07-11 02:46:54.330119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.330147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 
00:41:04.099 [2024-07-11 02:46:54.330254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.330308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 00:41:04.099 [2024-07-11 02:46:54.330394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.330420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 00:41:04.099 [2024-07-11 02:46:54.330569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.330623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 00:41:04.099 [2024-07-11 02:46:54.330747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.330808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 00:41:04.099 [2024-07-11 02:46:54.330923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.330977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 
00:41:04.099 [2024-07-11 02:46:54.331078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.331139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 00:41:04.099 [2024-07-11 02:46:54.331319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.331365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 00:41:04.099 [2024-07-11 02:46:54.331501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.331550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 00:41:04.099 [2024-07-11 02:46:54.331664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.331720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 00:41:04.099 [2024-07-11 02:46:54.331875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.331918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 
00:41:04.099 [2024-07-11 02:46:54.332043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.332102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 00:41:04.099 [2024-07-11 02:46:54.332187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.332213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 00:41:04.099 [2024-07-11 02:46:54.332306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.099 [2024-07-11 02:46:54.332333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.099 qpair failed and we were unable to recover it. 00:41:04.100 [2024-07-11 02:46:54.332441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.332495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 00:41:04.100 [2024-07-11 02:46:54.332679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.332729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 
00:41:04.100 [2024-07-11 02:46:54.332838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.332893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 00:41:04.100 [2024-07-11 02:46:54.333009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.333070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 00:41:04.100 [2024-07-11 02:46:54.333189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.333244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 00:41:04.100 [2024-07-11 02:46:54.333327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.333357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 00:41:04.100 [2024-07-11 02:46:54.333446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.333480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 
00:41:04.100 [2024-07-11 02:46:54.333578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.333611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 00:41:04.100 [2024-07-11 02:46:54.333697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.333734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 00:41:04.100 [2024-07-11 02:46:54.333821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.333848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 00:41:04.100 [2024-07-11 02:46:54.333934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.333967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 00:41:04.100 [2024-07-11 02:46:54.334082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.334116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 
00:41:04.100 [2024-07-11 02:46:54.334239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.334299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 00:41:04.100 [2024-07-11 02:46:54.334407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.334460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 00:41:04.100 [2024-07-11 02:46:54.334571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.334626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 00:41:04.100 [2024-07-11 02:46:54.334763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.334819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 00:41:04.100 [2024-07-11 02:46:54.334919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.334985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 
00:41:04.100 [2024-07-11 02:46:54.335092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.335153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 00:41:04.100 [2024-07-11 02:46:54.335318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.335372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 00:41:04.100 [2024-07-11 02:46:54.335543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.335581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 00:41:04.100 [2024-07-11 02:46:54.335697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.335753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 00:41:04.100 [2024-07-11 02:46:54.335842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.335871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 
00:41:04.100 [2024-07-11 02:46:54.335988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.336044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 00:41:04.100 [2024-07-11 02:46:54.336172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.336218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 00:41:04.100 [2024-07-11 02:46:54.336381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.336430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 00:41:04.100 [2024-07-11 02:46:54.336529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.336555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 00:41:04.100 [2024-07-11 02:46:54.336644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.336670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 
00:41:04.100 [2024-07-11 02:46:54.336821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.336850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 00:41:04.100 [2024-07-11 02:46:54.336991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.337061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 00:41:04.100 [2024-07-11 02:46:54.337185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.100 [2024-07-11 02:46:54.337232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.100 qpair failed and we were unable to recover it. 00:41:04.100 [2024-07-11 02:46:54.337363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.337406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 00:41:04.101 [2024-07-11 02:46:54.337498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.337538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 
00:41:04.101 [2024-07-11 02:46:54.337627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.337654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 00:41:04.101 [2024-07-11 02:46:54.337741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.337768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 00:41:04.101 [2024-07-11 02:46:54.337950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.338000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 00:41:04.101 [2024-07-11 02:46:54.338135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.338197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 00:41:04.101 [2024-07-11 02:46:54.338290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.338320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 
00:41:04.101 [2024-07-11 02:46:54.338444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.338499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 00:41:04.101 [2024-07-11 02:46:54.338658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.338686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 00:41:04.101 [2024-07-11 02:46:54.338775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.338801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 00:41:04.101 [2024-07-11 02:46:54.338905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.338969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 00:41:04.101 [2024-07-11 02:46:54.339145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.339202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 
00:41:04.101 [2024-07-11 02:46:54.339318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.339378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 00:41:04.101 [2024-07-11 02:46:54.339553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.339581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 00:41:04.101 [2024-07-11 02:46:54.339769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.339822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 00:41:04.101 [2024-07-11 02:46:54.339980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.340029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 00:41:04.101 [2024-07-11 02:46:54.340120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.340149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 
00:41:04.101 [2024-07-11 02:46:54.340236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.340264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 00:41:04.101 [2024-07-11 02:46:54.340383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.340454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 00:41:04.101 [2024-07-11 02:46:54.340664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.340716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 00:41:04.101 [2024-07-11 02:46:54.340830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.340878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 00:41:04.101 [2024-07-11 02:46:54.340973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.341010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 
00:41:04.101 [2024-07-11 02:46:54.341179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.341240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 00:41:04.101 [2024-07-11 02:46:54.341337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.341365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 00:41:04.101 [2024-07-11 02:46:54.341536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.341589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 00:41:04.101 [2024-07-11 02:46:54.341777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.341823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 00:41:04.101 [2024-07-11 02:46:54.341904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.341929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 
00:41:04.101 [2024-07-11 02:46:54.342043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.342099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 00:41:04.101 [2024-07-11 02:46:54.342210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.342265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 00:41:04.101 [2024-07-11 02:46:54.342438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.342492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.101 qpair failed and we were unable to recover it. 00:41:04.101 [2024-07-11 02:46:54.342593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.101 [2024-07-11 02:46:54.342619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 00:41:04.102 [2024-07-11 02:46:54.342704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.342731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 
00:41:04.102 [2024-07-11 02:46:54.342844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.342871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 00:41:04.102 [2024-07-11 02:46:54.343007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.343056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 00:41:04.102 [2024-07-11 02:46:54.343176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.343236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 00:41:04.102 [2024-07-11 02:46:54.343352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.343392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 00:41:04.102 [2024-07-11 02:46:54.343484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.343517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 
00:41:04.102 [2024-07-11 02:46:54.343658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.343710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 00:41:04.102 [2024-07-11 02:46:54.343795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.343821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 00:41:04.102 [2024-07-11 02:46:54.343934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.343981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 00:41:04.102 [2024-07-11 02:46:54.344076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.344105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 00:41:04.102 [2024-07-11 02:46:54.344201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.344238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 
00:41:04.102 [2024-07-11 02:46:54.344406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.344443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 00:41:04.102 [2024-07-11 02:46:54.344637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.344691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 00:41:04.102 [2024-07-11 02:46:54.344847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.344899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 00:41:04.102 [2024-07-11 02:46:54.344994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.345020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 00:41:04.102 [2024-07-11 02:46:54.345103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.345131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 
00:41:04.102 [2024-07-11 02:46:54.345258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.345300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 00:41:04.102 [2024-07-11 02:46:54.345419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.345452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 00:41:04.102 [2024-07-11 02:46:54.345552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.345578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 00:41:04.102 [2024-07-11 02:46:54.345696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.345723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 00:41:04.102 [2024-07-11 02:46:54.345907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.345942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 
00:41:04.102 [2024-07-11 02:46:54.346105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.346158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 00:41:04.102 [2024-07-11 02:46:54.346324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.346378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 00:41:04.102 [2024-07-11 02:46:54.346473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.346504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 00:41:04.102 [2024-07-11 02:46:54.346690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.346739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 00:41:04.102 [2024-07-11 02:46:54.346916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.346955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 
00:41:04.102 [2024-07-11 02:46:54.347184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.347233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 00:41:04.102 [2024-07-11 02:46:54.347395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.347448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 00:41:04.102 [2024-07-11 02:46:54.347576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.347627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.102 qpair failed and we were unable to recover it. 00:41:04.102 [2024-07-11 02:46:54.347757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.102 [2024-07-11 02:46:54.347811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.103 qpair failed and we were unable to recover it. 00:41:04.103 [2024-07-11 02:46:54.347969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.103 [2024-07-11 02:46:54.348033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.103 qpair failed and we were unable to recover it. 
00:41:04.103 [2024-07-11 02:46:54.348214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.103 [2024-07-11 02:46:54.348271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.103 qpair failed and we were unable to recover it. 00:41:04.103 [2024-07-11 02:46:54.348464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.103 [2024-07-11 02:46:54.348532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.103 qpair failed and we were unable to recover it. 00:41:04.103 [2024-07-11 02:46:54.348625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.103 [2024-07-11 02:46:54.348653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.103 qpair failed and we were unable to recover it. 00:41:04.103 [2024-07-11 02:46:54.348752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.103 [2024-07-11 02:46:54.348778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.103 qpair failed and we were unable to recover it. 00:41:04.103 [2024-07-11 02:46:54.348867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.103 [2024-07-11 02:46:54.348895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.103 qpair failed and we were unable to recover it. 
00:41:04.103 [2024-07-11 02:46:54.349056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.103 [2024-07-11 02:46:54.349117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.103 qpair failed and we were unable to recover it. 00:41:04.103 [2024-07-11 02:46:54.349305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.103 [2024-07-11 02:46:54.349334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.103 qpair failed and we were unable to recover it. 00:41:04.103 [2024-07-11 02:46:54.349448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.103 [2024-07-11 02:46:54.349509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.103 qpair failed and we were unable to recover it. 00:41:04.103 [2024-07-11 02:46:54.349769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.103 [2024-07-11 02:46:54.349821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.103 qpair failed and we were unable to recover it. 00:41:04.103 [2024-07-11 02:46:54.349994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.103 [2024-07-11 02:46:54.350045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.103 qpair failed and we were unable to recover it. 
00:41:04.103 [2024-07-11 02:46:54.350214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.103 [2024-07-11 02:46:54.350264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.103 qpair failed and we were unable to recover it. 00:41:04.103 [2024-07-11 02:46:54.350502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.103 [2024-07-11 02:46:54.350558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.103 qpair failed and we were unable to recover it. 00:41:04.103 [2024-07-11 02:46:54.350745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.103 [2024-07-11 02:46:54.350799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.103 qpair failed and we were unable to recover it. 00:41:04.103 [2024-07-11 02:46:54.350909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.103 [2024-07-11 02:46:54.350944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.103 qpair failed and we were unable to recover it. 00:41:04.103 [2024-07-11 02:46:54.351135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.103 [2024-07-11 02:46:54.351193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.103 qpair failed and we were unable to recover it. 
00:41:04.103 [2024-07-11 02:46:54.351323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.103 [2024-07-11 02:46:54.351379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.103 qpair failed and we were unable to recover it. 00:41:04.103 [2024-07-11 02:46:54.351471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.103 [2024-07-11 02:46:54.351499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.103 qpair failed and we were unable to recover it. 00:41:04.103 [2024-07-11 02:46:54.351684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.103 [2024-07-11 02:46:54.351735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.103 qpair failed and we were unable to recover it. 00:41:04.103 [2024-07-11 02:46:54.351937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.103 [2024-07-11 02:46:54.351964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.103 qpair failed and we were unable to recover it. 00:41:04.103 [2024-07-11 02:46:54.352079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.103 [2024-07-11 02:46:54.352138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.103 qpair failed and we were unable to recover it. 
00:41:04.103 [2024-07-11 02:46:54.352291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.103 [2024-07-11 02:46:54.352338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.103 qpair failed and we were unable to recover it. 00:41:04.103 [2024-07-11 02:46:54.352426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.103 [2024-07-11 02:46:54.352457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.103 qpair failed and we were unable to recover it. 00:41:04.103 [2024-07-11 02:46:54.352624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.103 [2024-07-11 02:46:54.352678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.103 qpair failed and we were unable to recover it. 00:41:04.103 [2024-07-11 02:46:54.352846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.103 [2024-07-11 02:46:54.352897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.103 qpair failed and we were unable to recover it. 00:41:04.104 [2024-07-11 02:46:54.352986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.104 [2024-07-11 02:46:54.353011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.104 qpair failed and we were unable to recover it. 
00:41:04.104 [2024-07-11 02:46:54.353223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.104 [2024-07-11 02:46:54.353274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.104 qpair failed and we were unable to recover it. 00:41:04.104 [2024-07-11 02:46:54.353373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.104 [2024-07-11 02:46:54.353401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.104 qpair failed and we were unable to recover it. 00:41:04.104 [2024-07-11 02:46:54.353525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.104 [2024-07-11 02:46:54.353581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.104 qpair failed and we were unable to recover it. 00:41:04.104 [2024-07-11 02:46:54.353672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.104 [2024-07-11 02:46:54.353698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.104 qpair failed and we were unable to recover it. 00:41:04.104 [2024-07-11 02:46:54.353831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.104 [2024-07-11 02:46:54.353879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.104 qpair failed and we were unable to recover it. 
00:41:04.104 [2024-07-11 02:46:54.353978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.104 [2024-07-11 02:46:54.354005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.104 qpair failed and we were unable to recover it.
00:41:04.104 [2024-07-11 02:46:54.354093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.104 [2024-07-11 02:46:54.354121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.104 qpair failed and we were unable to recover it.
00:41:04.104 [2024-07-11 02:46:54.354218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.104 [2024-07-11 02:46:54.354259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.104 qpair failed and we were unable to recover it.
00:41:04.104 [2024-07-11 02:46:54.354429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.104 [2024-07-11 02:46:54.354485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.104 qpair failed and we were unable to recover it.
00:41:04.104 [2024-07-11 02:46:54.354583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.104 [2024-07-11 02:46:54.354612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.104 qpair failed and we were unable to recover it.
00:41:04.104 [2024-07-11 02:46:54.354743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.104 [2024-07-11 02:46:54.354779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.104 qpair failed and we were unable to recover it.
00:41:04.104 [2024-07-11 02:46:54.354878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.104 [2024-07-11 02:46:54.354904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.104 qpair failed and we were unable to recover it.
00:41:04.104 [2024-07-11 02:46:54.355080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.104 [2024-07-11 02:46:54.355132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.104 qpair failed and we were unable to recover it.
00:41:04.104 [2024-07-11 02:46:54.355253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.104 [2024-07-11 02:46:54.355308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.104 qpair failed and we were unable to recover it.
00:41:04.104 [2024-07-11 02:46:54.355480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.104 [2024-07-11 02:46:54.355542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.104 qpair failed and we were unable to recover it.
00:41:04.104 [2024-07-11 02:46:54.355707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.104 [2024-07-11 02:46:54.355735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.104 qpair failed and we were unable to recover it.
00:41:04.104 [2024-07-11 02:46:54.355845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.104 [2024-07-11 02:46:54.355898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.104 qpair failed and we were unable to recover it.
00:41:04.104 [2024-07-11 02:46:54.356087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.104 [2024-07-11 02:46:54.356115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.104 qpair failed and we were unable to recover it.
00:41:04.104 [2024-07-11 02:46:54.356243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.104 [2024-07-11 02:46:54.356301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.104 qpair failed and we were unable to recover it.
00:41:04.104 [2024-07-11 02:46:54.356476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.104 [2024-07-11 02:46:54.356535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.104 qpair failed and we were unable to recover it.
00:41:04.104 [2024-07-11 02:46:54.356664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.104 [2024-07-11 02:46:54.356710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.104 qpair failed and we were unable to recover it.
00:41:04.104 [2024-07-11 02:46:54.356888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.104 [2024-07-11 02:46:54.356918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.104 qpair failed and we were unable to recover it.
00:41:04.104 [2024-07-11 02:46:54.357079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.104 [2024-07-11 02:46:54.357129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.104 qpair failed and we were unable to recover it.
00:41:04.104 [2024-07-11 02:46:54.357248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.104 [2024-07-11 02:46:54.357305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.104 qpair failed and we were unable to recover it.
00:41:04.104 [2024-07-11 02:46:54.357528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.104 [2024-07-11 02:46:54.357574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.104 qpair failed and we were unable to recover it.
00:41:04.104 [2024-07-11 02:46:54.357740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.104 [2024-07-11 02:46:54.357792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.104 qpair failed and we were unable to recover it.
00:41:04.104 [2024-07-11 02:46:54.357957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.104 [2024-07-11 02:46:54.358014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.104 qpair failed and we were unable to recover it.
00:41:04.104 [2024-07-11 02:46:54.358171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.104 [2024-07-11 02:46:54.358224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.104 qpair failed and we were unable to recover it.
00:41:04.104 [2024-07-11 02:46:54.358318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.104 [2024-07-11 02:46:54.358344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.104 qpair failed and we were unable to recover it.
00:41:04.104 [2024-07-11 02:46:54.358519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.358565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.358751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.358778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.358950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.359016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.359145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.359196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.359289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.359316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.359486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.359561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.359662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.359689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.359866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.359918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.360131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.360180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.360266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.360292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.360412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.360471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.360659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.360710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.360897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.360953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.361134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.361189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.361329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.361386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.361548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.361577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.361669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.361695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.361890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.361953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.362064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.362101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.362195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.362222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.362313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.362348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.362457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.362484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.362604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.362643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.362770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.362809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.362900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.362929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.363098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.363149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.363239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.363268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.363405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.363464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.363562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.363589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.363679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.363716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.363807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.363834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.105 [2024-07-11 02:46:54.364018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.105 [2024-07-11 02:46:54.364071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.105 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.364233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.364285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.364404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.364460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.364649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.364707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.364881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.364930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.365090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.365139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.365254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.365309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.365431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.365460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.365603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.365658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.365823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.365875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.366043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.366096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.366203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.366264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.366368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.366398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.366529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.366557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.366790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.366825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.366910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.366936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.367039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.367101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.367270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.367321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.367436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.367490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.367630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.367683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.367846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.367896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.367980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.368005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.368184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.368234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.368341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.368407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.368572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.368603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.368697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.368725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.368859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.368911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.369062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.369117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.369224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.369252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.369379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.369431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.106 qpair failed and we were unable to recover it.
00:41:04.106 [2024-07-11 02:46:54.369529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.106 [2024-07-11 02:46:54.369558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.369680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.369738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.369876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.369930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.370032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.370059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.370232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.370283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.370389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.370458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.370562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.370626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.370782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.370834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.370982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.371033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.371202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.371254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.371427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.371483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.371654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.371710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.371851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.371905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.372044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.372101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.372235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.372289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.372447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.372499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.372705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.372755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.372890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.372933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.373034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.373102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.373187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.373213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.373344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.373397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.373582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.373611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.373702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.373729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.373892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.373943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.374052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.374080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.374212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.374275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.374407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.374464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.374557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.374585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.374676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.374703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.374881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.107 [2024-07-11 02:46:54.374934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.107 qpair failed and we were unable to recover it.
00:41:04.107 [2024-07-11 02:46:54.375114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.107 [2024-07-11 02:46:54.375163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.107 qpair failed and we were unable to recover it. 00:41:04.107 [2024-07-11 02:46:54.375295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.108 [2024-07-11 02:46:54.375356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.108 qpair failed and we were unable to recover it. 00:41:04.108 [2024-07-11 02:46:54.375469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.108 [2024-07-11 02:46:54.375534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.108 qpair failed and we were unable to recover it. 00:41:04.108 [2024-07-11 02:46:54.375628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.108 [2024-07-11 02:46:54.375656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.108 qpair failed and we were unable to recover it. 00:41:04.108 [2024-07-11 02:46:54.375744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.108 [2024-07-11 02:46:54.375780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.108 qpair failed and we were unable to recover it. 
00:41:04.108 [2024-07-11 02:46:54.375946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.108 [2024-07-11 02:46:54.375975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.108 qpair failed and we were unable to recover it. 00:41:04.108 [2024-07-11 02:46:54.376151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.108 [2024-07-11 02:46:54.376203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.108 qpair failed and we were unable to recover it. 00:41:04.108 [2024-07-11 02:46:54.376390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.108 [2024-07-11 02:46:54.376418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.108 qpair failed and we were unable to recover it. 00:41:04.108 [2024-07-11 02:46:54.376564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.108 [2024-07-11 02:46:54.376623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.108 qpair failed and we were unable to recover it. 00:41:04.108 [2024-07-11 02:46:54.376786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.108 [2024-07-11 02:46:54.376827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.108 qpair failed and we were unable to recover it. 
00:41:04.108 [2024-07-11 02:46:54.376951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.108 [2024-07-11 02:46:54.377000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.108 qpair failed and we were unable to recover it. 00:41:04.108 [2024-07-11 02:46:54.377125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.108 [2024-07-11 02:46:54.377169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.108 qpair failed and we were unable to recover it. 00:41:04.108 [2024-07-11 02:46:54.377257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.108 [2024-07-11 02:46:54.377283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.108 qpair failed and we were unable to recover it. 00:41:04.108 [2024-07-11 02:46:54.377416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.108 [2024-07-11 02:46:54.377464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.108 qpair failed and we were unable to recover it. 00:41:04.108 [2024-07-11 02:46:54.377591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.108 [2024-07-11 02:46:54.377642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.108 qpair failed and we were unable to recover it. 
00:41:04.108 [2024-07-11 02:46:54.377777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.108 [2024-07-11 02:46:54.377824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.108 qpair failed and we were unable to recover it. 00:41:04.108 [2024-07-11 02:46:54.377982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.108 [2024-07-11 02:46:54.378055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.108 qpair failed and we were unable to recover it. 00:41:04.108 [2024-07-11 02:46:54.378221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.108 [2024-07-11 02:46:54.378251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.108 qpair failed and we were unable to recover it. 00:41:04.108 [2024-07-11 02:46:54.378365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.108 [2024-07-11 02:46:54.378420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.108 qpair failed and we were unable to recover it. 00:41:04.108 [2024-07-11 02:46:54.378562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.108 [2024-07-11 02:46:54.378590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.108 qpair failed and we were unable to recover it. 
00:41:04.108 [2024-07-11 02:46:54.378708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.108 [2024-07-11 02:46:54.378755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.108 qpair failed and we were unable to recover it. 00:41:04.108 [2024-07-11 02:46:54.378845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.108 [2024-07-11 02:46:54.378896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.108 qpair failed and we were unable to recover it. 00:41:04.108 [2024-07-11 02:46:54.379026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.108 [2024-07-11 02:46:54.379092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.108 qpair failed and we were unable to recover it. 00:41:04.108 [2024-07-11 02:46:54.379235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.108 [2024-07-11 02:46:54.379285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.108 qpair failed and we were unable to recover it. 00:41:04.108 [2024-07-11 02:46:54.379376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.108 [2024-07-11 02:46:54.379402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.108 qpair failed and we were unable to recover it. 
00:41:04.109 [2024-07-11 02:46:54.379526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.379575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 00:41:04.109 [2024-07-11 02:46:54.379662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.379688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 00:41:04.109 [2024-07-11 02:46:54.379792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.379819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 00:41:04.109 [2024-07-11 02:46:54.379969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.380020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 00:41:04.109 [2024-07-11 02:46:54.380139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.380191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 
00:41:04.109 [2024-07-11 02:46:54.380302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.380335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 00:41:04.109 [2024-07-11 02:46:54.380449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.380495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 00:41:04.109 [2024-07-11 02:46:54.380601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.380635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 00:41:04.109 [2024-07-11 02:46:54.380800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.380854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 00:41:04.109 [2024-07-11 02:46:54.380957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.380983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 
00:41:04.109 [2024-07-11 02:46:54.381113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.381168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 00:41:04.109 [2024-07-11 02:46:54.381323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.381359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 00:41:04.109 [2024-07-11 02:46:54.381445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.381471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 00:41:04.109 [2024-07-11 02:46:54.381648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.381699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 00:41:04.109 [2024-07-11 02:46:54.381844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.381905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 
00:41:04.109 [2024-07-11 02:46:54.382046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.382078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 00:41:04.109 [2024-07-11 02:46:54.382177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.382203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 00:41:04.109 [2024-07-11 02:46:54.382333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.382378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 00:41:04.109 [2024-07-11 02:46:54.382527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.382583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 00:41:04.109 [2024-07-11 02:46:54.382677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.382705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 
00:41:04.109 [2024-07-11 02:46:54.382889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.382918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 00:41:04.109 [2024-07-11 02:46:54.383003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.383030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 00:41:04.109 [2024-07-11 02:46:54.383137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.383201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 00:41:04.109 [2024-07-11 02:46:54.383362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.383421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 00:41:04.109 [2024-07-11 02:46:54.383523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.383565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 
00:41:04.109 [2024-07-11 02:46:54.383716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.383779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 00:41:04.109 [2024-07-11 02:46:54.383919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.383972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 00:41:04.109 [2024-07-11 02:46:54.384078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.384150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 00:41:04.109 [2024-07-11 02:46:54.384276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.384342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 00:41:04.109 [2024-07-11 02:46:54.384433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.109 [2024-07-11 02:46:54.384460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.109 qpair failed and we were unable to recover it. 
00:41:04.109 [2024-07-11 02:46:54.384577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.110 [2024-07-11 02:46:54.384652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.110 qpair failed and we were unable to recover it. 00:41:04.110 [2024-07-11 02:46:54.384834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.110 [2024-07-11 02:46:54.384863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.110 qpair failed and we were unable to recover it. 00:41:04.110 [2024-07-11 02:46:54.385054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.110 [2024-07-11 02:46:54.385112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.110 qpair failed and we were unable to recover it. 00:41:04.110 [2024-07-11 02:46:54.385278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.110 [2024-07-11 02:46:54.385306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.110 qpair failed and we were unable to recover it. 00:41:04.110 [2024-07-11 02:46:54.385412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.110 [2024-07-11 02:46:54.385476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.110 qpair failed and we were unable to recover it. 
00:41:04.110 [2024-07-11 02:46:54.385616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.110 [2024-07-11 02:46:54.385684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.110 qpair failed and we were unable to recover it. 00:41:04.110 [2024-07-11 02:46:54.385773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.110 [2024-07-11 02:46:54.385799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.110 qpair failed and we were unable to recover it. 00:41:04.110 [2024-07-11 02:46:54.385914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.110 [2024-07-11 02:46:54.385978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.110 qpair failed and we were unable to recover it. 00:41:04.110 [2024-07-11 02:46:54.386136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.110 [2024-07-11 02:46:54.386191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.110 qpair failed and we were unable to recover it. 00:41:04.110 [2024-07-11 02:46:54.386333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.110 [2024-07-11 02:46:54.386387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.110 qpair failed and we were unable to recover it. 
00:41:04.110 [2024-07-11 02:46:54.386538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.110 [2024-07-11 02:46:54.386597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.110 qpair failed and we were unable to recover it. 00:41:04.110 [2024-07-11 02:46:54.386715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.110 [2024-07-11 02:46:54.386779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.110 qpair failed and we were unable to recover it. 00:41:04.110 [2024-07-11 02:46:54.386873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.110 [2024-07-11 02:46:54.386900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.110 qpair failed and we were unable to recover it. 00:41:04.110 [2024-07-11 02:46:54.386985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.110 [2024-07-11 02:46:54.387013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.110 qpair failed and we were unable to recover it. 00:41:04.110 [2024-07-11 02:46:54.387167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.110 [2024-07-11 02:46:54.387222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.110 qpair failed and we were unable to recover it. 
00:41:04.110 [2024-07-11 02:46:54.387364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.110 [2024-07-11 02:46:54.387413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.110 qpair failed and we were unable to recover it. 00:41:04.110 [2024-07-11 02:46:54.387499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.110 [2024-07-11 02:46:54.387533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.110 qpair failed and we were unable to recover it. 00:41:04.110 [2024-07-11 02:46:54.387689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.110 [2024-07-11 02:46:54.387738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.110 qpair failed and we were unable to recover it. 00:41:04.110 [2024-07-11 02:46:54.387882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.110 [2024-07-11 02:46:54.387933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.110 qpair failed and we were unable to recover it. 00:41:04.110 [2024-07-11 02:46:54.388074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.110 [2024-07-11 02:46:54.388116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.110 qpair failed and we were unable to recover it. 
00:41:04.110 [2024-07-11 02:46:54.388203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.110 [2024-07-11 02:46:54.388236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.110 qpair failed and we were unable to recover it. 00:41:04.110 [2024-07-11 02:46:54.388348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.110 [2024-07-11 02:46:54.388409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.110 qpair failed and we were unable to recover it. 00:41:04.110 [2024-07-11 02:46:54.388551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.110 [2024-07-11 02:46:54.388606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.110 qpair failed and we were unable to recover it. 00:41:04.110 [2024-07-11 02:46:54.388724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.110 [2024-07-11 02:46:54.388784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.110 qpair failed and we were unable to recover it. 00:41:04.110 [2024-07-11 02:46:54.388886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.110 [2024-07-11 02:46:54.388914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.110 qpair failed and we were unable to recover it. 
00:41:04.114 [2024-07-11 02:46:54.407833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.114 [2024-07-11 02:46:54.407861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.114 qpair failed and we were unable to recover it. 00:41:04.114 [2024-07-11 02:46:54.407968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.114 [2024-07-11 02:46:54.408037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.114 qpair failed and we were unable to recover it. 00:41:04.114 [2024-07-11 02:46:54.408186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.114 [2024-07-11 02:46:54.408240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.114 qpair failed and we were unable to recover it. 00:41:04.114 [2024-07-11 02:46:54.408338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.114 [2024-07-11 02:46:54.408367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.114 qpair failed and we were unable to recover it. 00:41:04.114 [2024-07-11 02:46:54.408549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.114 [2024-07-11 02:46:54.408577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.114 qpair failed and we were unable to recover it. 
00:41:04.114 [2024-07-11 02:46:54.408814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.114 [2024-07-11 02:46:54.408842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.114 qpair failed and we were unable to recover it. 00:41:04.114 [2024-07-11 02:46:54.408939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.114 [2024-07-11 02:46:54.408970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.114 qpair failed and we were unable to recover it. 00:41:04.114 [2024-07-11 02:46:54.409067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.114 [2024-07-11 02:46:54.409094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.114 qpair failed and we were unable to recover it. 00:41:04.114 [2024-07-11 02:46:54.409185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.114 [2024-07-11 02:46:54.409211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.114 qpair failed and we were unable to recover it. 00:41:04.114 [2024-07-11 02:46:54.409328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.114 [2024-07-11 02:46:54.409385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.114 qpair failed and we were unable to recover it. 
00:41:04.114 [2024-07-11 02:46:54.409474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.114 [2024-07-11 02:46:54.409502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.114 qpair failed and we were unable to recover it. 00:41:04.114 [2024-07-11 02:46:54.409657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.114 [2024-07-11 02:46:54.409717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.114 qpair failed and we were unable to recover it. 00:41:04.114 [2024-07-11 02:46:54.409845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.114 [2024-07-11 02:46:54.409902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.114 qpair failed and we were unable to recover it. 00:41:04.114 [2024-07-11 02:46:54.409993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.114 [2024-07-11 02:46:54.410022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.114 qpair failed and we were unable to recover it. 00:41:04.114 [2024-07-11 02:46:54.410161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.114 [2024-07-11 02:46:54.410215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.114 qpair failed and we were unable to recover it. 
00:41:04.114 [2024-07-11 02:46:54.410318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.114 [2024-07-11 02:46:54.410346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.114 qpair failed and we were unable to recover it. 00:41:04.114 [2024-07-11 02:46:54.410434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.410462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 00:41:04.115 [2024-07-11 02:46:54.410557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.410584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 00:41:04.115 [2024-07-11 02:46:54.410703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.410739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 00:41:04.115 [2024-07-11 02:46:54.410866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.410911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 
00:41:04.115 [2024-07-11 02:46:54.411005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.411031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 00:41:04.115 [2024-07-11 02:46:54.411122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.411156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 00:41:04.115 [2024-07-11 02:46:54.411260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.411288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 00:41:04.115 [2024-07-11 02:46:54.411422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.411474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 00:41:04.115 [2024-07-11 02:46:54.411588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.411636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 
00:41:04.115 [2024-07-11 02:46:54.411725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.411752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 00:41:04.115 [2024-07-11 02:46:54.411905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.411964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 00:41:04.115 [2024-07-11 02:46:54.412063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.412105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 00:41:04.115 [2024-07-11 02:46:54.412223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.412285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 00:41:04.115 [2024-07-11 02:46:54.412410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.412458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 
00:41:04.115 [2024-07-11 02:46:54.412604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.412660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 00:41:04.115 [2024-07-11 02:46:54.412816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.412870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 00:41:04.115 [2024-07-11 02:46:54.412974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.413001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 00:41:04.115 [2024-07-11 02:46:54.413087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.413118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 00:41:04.115 [2024-07-11 02:46:54.413211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.413239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 
00:41:04.115 [2024-07-11 02:46:54.413341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.413384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 00:41:04.115 [2024-07-11 02:46:54.413587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.413615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 00:41:04.115 [2024-07-11 02:46:54.413746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.413784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 00:41:04.115 [2024-07-11 02:46:54.413869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.413895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 00:41:04.115 [2024-07-11 02:46:54.414034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.414085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 
00:41:04.115 [2024-07-11 02:46:54.414217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.414262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 00:41:04.115 [2024-07-11 02:46:54.414353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.414380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 00:41:04.115 [2024-07-11 02:46:54.414481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.414508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 00:41:04.115 [2024-07-11 02:46:54.414654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.414714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 00:41:04.115 [2024-07-11 02:46:54.414851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.414907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 
00:41:04.115 [2024-07-11 02:46:54.415010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.415037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 00:41:04.115 [2024-07-11 02:46:54.415170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.415225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 00:41:04.115 [2024-07-11 02:46:54.415375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.115 [2024-07-11 02:46:54.415428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.115 qpair failed and we were unable to recover it. 00:41:04.116 [2024-07-11 02:46:54.415520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.415549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 00:41:04.116 [2024-07-11 02:46:54.415729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.415757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 
00:41:04.116 [2024-07-11 02:46:54.415866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.415927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 00:41:04.116 [2024-07-11 02:46:54.416037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.416066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 00:41:04.116 [2024-07-11 02:46:54.416185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.416249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 00:41:04.116 [2024-07-11 02:46:54.416345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.416370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 00:41:04.116 [2024-07-11 02:46:54.416485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.416540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 
00:41:04.116 [2024-07-11 02:46:54.416660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.416713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 00:41:04.116 [2024-07-11 02:46:54.416877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.416932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 00:41:04.116 [2024-07-11 02:46:54.417063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.417125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 00:41:04.116 [2024-07-11 02:46:54.417244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.417292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 00:41:04.116 [2024-07-11 02:46:54.417421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.417470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 
00:41:04.116 [2024-07-11 02:46:54.417590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.417622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 00:41:04.116 [2024-07-11 02:46:54.417716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.417752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 00:41:04.116 [2024-07-11 02:46:54.417879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.417931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 00:41:04.116 [2024-07-11 02:46:54.418031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.418058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 00:41:04.116 [2024-07-11 02:46:54.418164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.418223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 
00:41:04.116 [2024-07-11 02:46:54.418319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.418345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 00:41:04.116 [2024-07-11 02:46:54.418467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.418494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 00:41:04.116 [2024-07-11 02:46:54.418647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.418716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 00:41:04.116 [2024-07-11 02:46:54.418869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.418930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 00:41:04.116 [2024-07-11 02:46:54.419024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.419050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 
00:41:04.116 [2024-07-11 02:46:54.419185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.419241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 00:41:04.116 [2024-07-11 02:46:54.419363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.419425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 00:41:04.116 [2024-07-11 02:46:54.419525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.419561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 00:41:04.116 [2024-07-11 02:46:54.419711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.419759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 00:41:04.116 [2024-07-11 02:46:54.419918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.116 [2024-07-11 02:46:54.419971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.116 qpair failed and we were unable to recover it. 
00:41:04.116 [2024-07-11 02:46:54.420113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.116 [2024-07-11 02:46:54.420164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.116 qpair failed and we were unable to recover it.
00:41:04.120 (the preceding connect() failed / sock connection error / qpair failed sequence repeats 114 more times between 02:46:54.420255 and 02:46:54.438555, for tqpair handles 0x2266180, 0x7f333c000b90, 0x7f3334000b90, and 0x7f332c000b90, all against addr=10.0.0.2, port=4420)
00:41:04.120 [2024-07-11 02:46:54.438640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.120 [2024-07-11 02:46:54.438666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.120 qpair failed and we were unable to recover it. 00:41:04.120 [2024-07-11 02:46:54.438820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.120 [2024-07-11 02:46:54.438878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.120 qpair failed and we were unable to recover it. 00:41:04.120 [2024-07-11 02:46:54.439023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.120 [2024-07-11 02:46:54.439076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.120 qpair failed and we were unable to recover it. 00:41:04.121 [2024-07-11 02:46:54.439162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.439199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 00:41:04.121 [2024-07-11 02:46:54.439322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.439384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 
00:41:04.121 [2024-07-11 02:46:54.439523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.439582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 00:41:04.121 [2024-07-11 02:46:54.439720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.439775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 00:41:04.121 [2024-07-11 02:46:54.439875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.439902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 00:41:04.121 [2024-07-11 02:46:54.439991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.440016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 00:41:04.121 [2024-07-11 02:46:54.440165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.440223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 
00:41:04.121 [2024-07-11 02:46:54.440322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.440348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 00:41:04.121 [2024-07-11 02:46:54.440434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.440473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 00:41:04.121 [2024-07-11 02:46:54.440574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.440601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 00:41:04.121 [2024-07-11 02:46:54.440711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.440737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 00:41:04.121 [2024-07-11 02:46:54.440839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.440867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 
00:41:04.121 [2024-07-11 02:46:54.440962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.440990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 00:41:04.121 [2024-07-11 02:46:54.441131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.441192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 00:41:04.121 [2024-07-11 02:46:54.441301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.441328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 00:41:04.121 [2024-07-11 02:46:54.441426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.441453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 00:41:04.121 [2024-07-11 02:46:54.441592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.441619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 
00:41:04.121 [2024-07-11 02:46:54.441703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.441730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 00:41:04.121 [2024-07-11 02:46:54.441860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.441913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 00:41:04.121 [2024-07-11 02:46:54.441999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.442025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 00:41:04.121 [2024-07-11 02:46:54.442126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.442153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 00:41:04.121 [2024-07-11 02:46:54.442250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.442277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 
00:41:04.121 [2024-07-11 02:46:54.442380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.442406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 00:41:04.121 [2024-07-11 02:46:54.442494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.442526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 00:41:04.121 [2024-07-11 02:46:54.442629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.442655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 00:41:04.121 [2024-07-11 02:46:54.442741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.442768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 00:41:04.121 [2024-07-11 02:46:54.442856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.442883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 
00:41:04.121 [2024-07-11 02:46:54.442989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.443015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 00:41:04.121 [2024-07-11 02:46:54.443106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.443139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 00:41:04.121 [2024-07-11 02:46:54.443389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.443418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 00:41:04.121 [2024-07-11 02:46:54.443532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.121 [2024-07-11 02:46:54.443561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.121 qpair failed and we were unable to recover it. 00:41:04.122 [2024-07-11 02:46:54.443673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.122 [2024-07-11 02:46:54.443700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.122 qpair failed and we were unable to recover it. 
00:41:04.122 [2024-07-11 02:46:54.443785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.122 [2024-07-11 02:46:54.443812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.122 qpair failed and we were unable to recover it. 00:41:04.122 [2024-07-11 02:46:54.443917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.122 [2024-07-11 02:46:54.443944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.122 qpair failed and we were unable to recover it. 00:41:04.122 [2024-07-11 02:46:54.444027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.122 [2024-07-11 02:46:54.444054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.122 qpair failed and we were unable to recover it. 00:41:04.122 [2024-07-11 02:46:54.444152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.122 [2024-07-11 02:46:54.444179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.122 qpair failed and we were unable to recover it. 00:41:04.122 [2024-07-11 02:46:54.444278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.122 [2024-07-11 02:46:54.444305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.122 qpair failed and we were unable to recover it. 
00:41:04.122 [2024-07-11 02:46:54.444402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.122 [2024-07-11 02:46:54.444429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.122 qpair failed and we were unable to recover it. 00:41:04.122 [2024-07-11 02:46:54.444522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.122 [2024-07-11 02:46:54.444551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.122 qpair failed and we were unable to recover it. 00:41:04.122 [2024-07-11 02:46:54.444633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.122 [2024-07-11 02:46:54.444658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.122 qpair failed and we were unable to recover it. 00:41:04.122 [2024-07-11 02:46:54.444757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.122 [2024-07-11 02:46:54.444785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.122 qpair failed and we were unable to recover it. 00:41:04.122 [2024-07-11 02:46:54.444871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.122 [2024-07-11 02:46:54.444898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.122 qpair failed and we were unable to recover it. 
00:41:04.122 [2024-07-11 02:46:54.444986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.407 [2024-07-11 02:46:54.445013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.407 qpair failed and we were unable to recover it. 00:41:04.407 [2024-07-11 02:46:54.445111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.407 [2024-07-11 02:46:54.445139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.407 qpair failed and we were unable to recover it. 00:41:04.407 [2024-07-11 02:46:54.445228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.407 [2024-07-11 02:46:54.445255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.407 qpair failed and we were unable to recover it. 00:41:04.407 [2024-07-11 02:46:54.445351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.407 [2024-07-11 02:46:54.445378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.407 qpair failed and we were unable to recover it. 00:41:04.407 [2024-07-11 02:46:54.445471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.407 [2024-07-11 02:46:54.445498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.407 qpair failed and we were unable to recover it. 
00:41:04.407 [2024-07-11 02:46:54.445604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.407 [2024-07-11 02:46:54.445632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.407 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.445725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.445751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.445846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.445873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.445980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.446052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.446219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.446281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 
00:41:04.408 [2024-07-11 02:46:54.446410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.446475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.446582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.446611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.446699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.446726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.446820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.446847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.446936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.446962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 
00:41:04.408 [2024-07-11 02:46:54.447055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.447083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.447172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.447200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.447293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.447322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.447422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.447449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.447564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.447592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 
00:41:04.408 [2024-07-11 02:46:54.447720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.447768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.447860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.447887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.447989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.448015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.448098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.448124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.448207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.448234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 
00:41:04.408 [2024-07-11 02:46:54.448332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.448359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.448446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.448473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.448577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.448621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.448748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.448809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.448897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.448924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 
00:41:04.408 [2024-07-11 02:46:54.449011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.449038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.449169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.449231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.449315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.449341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.449430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.449457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.449559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.449587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 
00:41:04.408 [2024-07-11 02:46:54.449686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.449715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.449810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.449838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.449930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.449959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.450057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.450083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.450181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.450208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 
00:41:04.408 [2024-07-11 02:46:54.450308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.450335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.450424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.450450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.450542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.450570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.450664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.408 [2024-07-11 02:46:54.450691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.408 qpair failed and we were unable to recover it. 00:41:04.408 [2024-07-11 02:46:54.450779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.450805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 
00:41:04.409 [2024-07-11 02:46:54.450892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.450918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 00:41:04.409 [2024-07-11 02:46:54.451013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.451041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 00:41:04.409 [2024-07-11 02:46:54.451132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.451160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 00:41:04.409 [2024-07-11 02:46:54.451250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.451278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 00:41:04.409 [2024-07-11 02:46:54.451368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.451396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 
00:41:04.409 [2024-07-11 02:46:54.451478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.451505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 00:41:04.409 [2024-07-11 02:46:54.451619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.451649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 00:41:04.409 [2024-07-11 02:46:54.451751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.451777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 00:41:04.409 [2024-07-11 02:46:54.451876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.451903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 00:41:04.409 [2024-07-11 02:46:54.451994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.452030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 
00:41:04.409 [2024-07-11 02:46:54.452120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.452147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 00:41:04.409 [2024-07-11 02:46:54.452233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.452260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 00:41:04.409 [2024-07-11 02:46:54.452342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.452368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 00:41:04.409 [2024-07-11 02:46:54.452458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.452485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 00:41:04.409 [2024-07-11 02:46:54.452598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.452624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 
00:41:04.409 [2024-07-11 02:46:54.452706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.452732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 00:41:04.409 [2024-07-11 02:46:54.452820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.452846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 00:41:04.409 [2024-07-11 02:46:54.452929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.452956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 00:41:04.409 [2024-07-11 02:46:54.453052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.453078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 00:41:04.409 [2024-07-11 02:46:54.453169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.453196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 
00:41:04.409 [2024-07-11 02:46:54.453280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.453306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 00:41:04.409 [2024-07-11 02:46:54.453391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.453417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 00:41:04.409 [2024-07-11 02:46:54.453526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.453559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 00:41:04.409 [2024-07-11 02:46:54.453666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.453692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 00:41:04.409 [2024-07-11 02:46:54.453793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.453857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 
00:41:04.409 [2024-07-11 02:46:54.453954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.453980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 00:41:04.409 [2024-07-11 02:46:54.454065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.454091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 00:41:04.409 [2024-07-11 02:46:54.454182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.454209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 00:41:04.409 [2024-07-11 02:46:54.454297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.454323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 00:41:04.409 [2024-07-11 02:46:54.454406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.454433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 
00:41:04.409 [2024-07-11 02:46:54.454524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.454557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 00:41:04.409 [2024-07-11 02:46:54.454658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.409 [2024-07-11 02:46:54.454685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.409 qpair failed and we were unable to recover it. 00:41:04.409 [2024-07-11 02:46:54.454769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.454795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.454878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.454904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.454989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.455015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 
00:41:04.410 [2024-07-11 02:46:54.455098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.455124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.455213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.455245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.455328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.455355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.455440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.455466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.455554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.455581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 
00:41:04.410 [2024-07-11 02:46:54.455677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.455704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.455789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.455815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.455900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.455928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.456033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.456060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.456157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.456189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 
00:41:04.410 [2024-07-11 02:46:54.456282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.456312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.456415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.456444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.456542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.456571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.456720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.456805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.456937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.456994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 
00:41:04.410 [2024-07-11 02:46:54.457094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.457122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.457212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.457240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.457339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.457367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.457451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.457479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.457612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.457671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 
00:41:04.410 [2024-07-11 02:46:54.457765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.457792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.457881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.457908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.457995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.458022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.458108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.458135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.458228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.458257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 
00:41:04.410 [2024-07-11 02:46:54.458366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.458393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.458482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.458516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.458607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.458636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.458724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.458759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.458864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.458893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 
00:41:04.410 [2024-07-11 02:46:54.458979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.459007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.459106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.459132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.459227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.459254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.410 [2024-07-11 02:46:54.459339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.410 [2024-07-11 02:46:54.459365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.410 qpair failed and we were unable to recover it. 00:41:04.411 [2024-07-11 02:46:54.459475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.411 [2024-07-11 02:46:54.459548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.411 qpair failed and we were unable to recover it. 
00:41:04.411 [2024-07-11 02:46:54.459643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.411 [2024-07-11 02:46:54.459669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.411 qpair failed and we were unable to recover it. 00:41:04.411 [2024-07-11 02:46:54.459759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.411 [2024-07-11 02:46:54.459785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.411 qpair failed and we were unable to recover it. 00:41:04.411 [2024-07-11 02:46:54.459874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.411 [2024-07-11 02:46:54.459901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.411 qpair failed and we were unable to recover it. 00:41:04.411 [2024-07-11 02:46:54.459994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.411 [2024-07-11 02:46:54.460021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.411 qpair failed and we were unable to recover it. 00:41:04.411 [2024-07-11 02:46:54.460113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.411 [2024-07-11 02:46:54.460140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.411 qpair failed and we were unable to recover it. 
00:41:04.411 [2024-07-11 02:46:54.460240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.411 [2024-07-11 02:46:54.460266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.411 qpair failed and we were unable to recover it. 00:41:04.411 [2024-07-11 02:46:54.460352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.411 [2024-07-11 02:46:54.460379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.411 qpair failed and we were unable to recover it. 00:41:04.411 [2024-07-11 02:46:54.460474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.411 [2024-07-11 02:46:54.460501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.411 qpair failed and we were unable to recover it. 00:41:04.411 [2024-07-11 02:46:54.460612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.411 [2024-07-11 02:46:54.460639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.411 qpair failed and we were unable to recover it. 00:41:04.411 [2024-07-11 02:46:54.460776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.411 [2024-07-11 02:46:54.460820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.411 qpair failed and we were unable to recover it. 
00:41:04.411 [2024-07-11 02:46:54.460908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.411 [2024-07-11 02:46:54.460936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.411 qpair failed and we were unable to recover it. 00:41:04.411 [2024-07-11 02:46:54.461023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.411 [2024-07-11 02:46:54.461049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.411 qpair failed and we were unable to recover it. 00:41:04.411 [2024-07-11 02:46:54.461136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.411 [2024-07-11 02:46:54.461164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.411 qpair failed and we were unable to recover it. 00:41:04.411 [2024-07-11 02:46:54.461248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.411 [2024-07-11 02:46:54.461275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.411 qpair failed and we were unable to recover it. 00:41:04.411 [2024-07-11 02:46:54.461362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.411 [2024-07-11 02:46:54.461389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.411 qpair failed and we were unable to recover it. 
00:41:04.411 [2024-07-11 02:46:54.461489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.411 [2024-07-11 02:46:54.461523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.411 qpair failed and we were unable to recover it. 00:41:04.411 [2024-07-11 02:46:54.461624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.411 [2024-07-11 02:46:54.461651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.411 qpair failed and we were unable to recover it. 00:41:04.411 [2024-07-11 02:46:54.461734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.411 [2024-07-11 02:46:54.461760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.411 qpair failed and we were unable to recover it. 00:41:04.411 [2024-07-11 02:46:54.461861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.411 [2024-07-11 02:46:54.461888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.411 qpair failed and we were unable to recover it. 00:41:04.411 [2024-07-11 02:46:54.462003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.411 [2024-07-11 02:46:54.462059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.411 qpair failed and we were unable to recover it. 
00:41:04.411 [2024-07-11 02:46:54.462203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.411 [2024-07-11 02:46:54.462291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.411 qpair failed and we were unable to recover it.
00:41:04.411 [2024-07-11 02:46:54.462392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.411 [2024-07-11 02:46:54.462418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.411 qpair failed and we were unable to recover it.
00:41:04.411 [2024-07-11 02:46:54.462523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.411 [2024-07-11 02:46:54.462553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.411 qpair failed and we were unable to recover it.
00:41:04.411 [2024-07-11 02:46:54.462647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.411 [2024-07-11 02:46:54.462674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.411 qpair failed and we were unable to recover it.
00:41:04.411 [2024-07-11 02:46:54.462762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.411 [2024-07-11 02:46:54.462789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.411 qpair failed and we were unable to recover it.
00:41:04.411 [2024-07-11 02:46:54.462877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.411 [2024-07-11 02:46:54.462904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.411 qpair failed and we were unable to recover it.
00:41:04.411 [2024-07-11 02:46:54.462989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.411 [2024-07-11 02:46:54.463016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.411 qpair failed and we were unable to recover it.
00:41:04.411 [2024-07-11 02:46:54.463117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.411 [2024-07-11 02:46:54.463143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.411 qpair failed and we were unable to recover it.
00:41:04.411 [2024-07-11 02:46:54.463225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.411 [2024-07-11 02:46:54.463251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.411 qpair failed and we were unable to recover it.
00:41:04.411 [2024-07-11 02:46:54.463346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.411 [2024-07-11 02:46:54.463377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.411 qpair failed and we were unable to recover it.
00:41:04.411 [2024-07-11 02:46:54.463470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.411 [2024-07-11 02:46:54.463498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.411 qpair failed and we were unable to recover it.
00:41:04.411 [2024-07-11 02:46:54.463593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.411 [2024-07-11 02:46:54.463621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.411 qpair failed and we were unable to recover it.
00:41:04.411 [2024-07-11 02:46:54.463742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.411 [2024-07-11 02:46:54.463800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.411 qpair failed and we were unable to recover it.
00:41:04.411 [2024-07-11 02:46:54.463930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.411 [2024-07-11 02:46:54.463993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.411 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.464080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.464106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.464193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.464220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.464319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.464345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.464436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.464464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.464564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.464593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.464688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.464716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.464815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.464841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.464926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.464952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.465051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.465079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.465175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.465202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.465290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.465317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.465406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.465433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.465526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.465554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.465652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.465683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.465785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.465811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.465900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.465927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.466036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.466089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.466182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.466209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.466312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.466340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.466439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.466465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.466557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.466584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.466673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.466701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.466787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.466813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.466919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.466946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.467036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.467063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.467152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.467180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.467271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.467298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.467394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.467422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.467528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.467558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.467645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.467673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.412 qpair failed and we were unable to recover it.
00:41:04.412 [2024-07-11 02:46:54.467759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.412 [2024-07-11 02:46:54.467788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.467879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.467905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.468004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.468030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.468115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.468142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.468229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.468256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.468347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.468375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.468466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.468492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.468600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.468628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.468715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.468742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.468840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.468866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.468968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.468995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.469090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.469118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.469211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.469237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.469317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.469343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.469428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.469455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.469540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.469567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.469665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.469694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.469794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.469822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.469930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.469957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.470044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.470072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.470168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.470195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.470312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.470338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.470429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.470456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.470543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.470569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.470675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.470701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.470787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.470813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.470908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.470935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.471036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.471065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.471150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.471177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.471319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.471371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.471462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.471489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.471589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.471618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.413 qpair failed and we were unable to recover it.
00:41:04.413 [2024-07-11 02:46:54.471711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.413 [2024-07-11 02:46:54.471738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.414 qpair failed and we were unable to recover it.
00:41:04.414 [2024-07-11 02:46:54.471822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.414 [2024-07-11 02:46:54.471848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.414 qpair failed and we were unable to recover it.
00:41:04.414 [2024-07-11 02:46:54.471984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.414 [2024-07-11 02:46:54.472066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.414 qpair failed and we were unable to recover it.
00:41:04.414 [2024-07-11 02:46:54.472166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.414 [2024-07-11 02:46:54.472194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.414 qpair failed and we were unable to recover it.
00:41:04.414 [2024-07-11 02:46:54.472294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.414 [2024-07-11 02:46:54.472323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.414 qpair failed and we were unable to recover it.
00:41:04.414 [2024-07-11 02:46:54.472426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.414 [2024-07-11 02:46:54.472454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.414 qpair failed and we were unable to recover it.
00:41:04.414 [2024-07-11 02:46:54.472563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.414 [2024-07-11 02:46:54.472592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.414 qpair failed and we were unable to recover it.
00:41:04.414 [2024-07-11 02:46:54.472767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.414 [2024-07-11 02:46:54.472794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.414 qpair failed and we were unable to recover it.
00:41:04.414 [2024-07-11 02:46:54.472879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.414 [2024-07-11 02:46:54.472905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.414 qpair failed and we were unable to recover it.
00:41:04.414 [2024-07-11 02:46:54.473002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.414 [2024-07-11 02:46:54.473029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.414 qpair failed and we were unable to recover it.
00:41:04.414 [2024-07-11 02:46:54.473123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.414 [2024-07-11 02:46:54.473150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.414 qpair failed and we were unable to recover it.
00:41:04.414 [2024-07-11 02:46:54.473233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.414 [2024-07-11 02:46:54.473260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.414 qpair failed and we were unable to recover it.
00:41:04.414 [2024-07-11 02:46:54.473373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.414 [2024-07-11 02:46:54.473433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.414 qpair failed and we were unable to recover it.
00:41:04.414 [2024-07-11 02:46:54.473534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.414 [2024-07-11 02:46:54.473561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.414 qpair failed and we were unable to recover it.
00:41:04.414 [2024-07-11 02:46:54.473656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.414 [2024-07-11 02:46:54.473683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.414 qpair failed and we were unable to recover it.
00:41:04.414 [2024-07-11 02:46:54.473807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.414 [2024-07-11 02:46:54.473869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.414 qpair failed and we were unable to recover it.
00:41:04.414 [2024-07-11 02:46:54.473954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.414 [2024-07-11 02:46:54.473982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.414 qpair failed and we were unable to recover it.
00:41:04.414 [2024-07-11 02:46:54.474067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.414 [2024-07-11 02:46:54.474093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.414 qpair failed and we were unable to recover it.
00:41:04.414 [2024-07-11 02:46:54.474186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.414 [2024-07-11 02:46:54.474220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.414 qpair failed and we were unable to recover it.
00:41:04.414 [2024-07-11 02:46:54.474310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.414 [2024-07-11 02:46:54.474336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.414 qpair failed and we were unable to recover it. 00:41:04.414 [2024-07-11 02:46:54.474425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.414 [2024-07-11 02:46:54.474451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.414 qpair failed and we were unable to recover it. 00:41:04.414 [2024-07-11 02:46:54.474549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.414 [2024-07-11 02:46:54.474577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.414 qpair failed and we were unable to recover it. 00:41:04.414 [2024-07-11 02:46:54.474684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.414 [2024-07-11 02:46:54.474712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.414 qpair failed and we were unable to recover it. 00:41:04.414 [2024-07-11 02:46:54.474800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.414 [2024-07-11 02:46:54.474826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.414 qpair failed and we were unable to recover it. 
00:41:04.414 [2024-07-11 02:46:54.474922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.414 [2024-07-11 02:46:54.474949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.414 qpair failed and we were unable to recover it. 00:41:04.414 [2024-07-11 02:46:54.475062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.414 [2024-07-11 02:46:54.475125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.414 qpair failed and we were unable to recover it. 00:41:04.414 [2024-07-11 02:46:54.475219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.414 [2024-07-11 02:46:54.475246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.414 qpair failed and we were unable to recover it. 00:41:04.414 [2024-07-11 02:46:54.475330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.414 [2024-07-11 02:46:54.475357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.414 qpair failed and we were unable to recover it. 00:41:04.414 [2024-07-11 02:46:54.475448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.414 [2024-07-11 02:46:54.475474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.414 qpair failed and we were unable to recover it. 
00:41:04.414 [2024-07-11 02:46:54.475595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.414 [2024-07-11 02:46:54.475657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.414 qpair failed and we were unable to recover it. 00:41:04.414 [2024-07-11 02:46:54.475747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.414 [2024-07-11 02:46:54.475775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.414 qpair failed and we were unable to recover it. 00:41:04.414 [2024-07-11 02:46:54.475895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.414 [2024-07-11 02:46:54.475952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.414 qpair failed and we were unable to recover it. 00:41:04.414 [2024-07-11 02:46:54.476102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.414 [2024-07-11 02:46:54.476157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.414 qpair failed and we were unable to recover it. 00:41:04.414 [2024-07-11 02:46:54.476242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.414 [2024-07-11 02:46:54.476269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.414 qpair failed and we were unable to recover it. 
00:41:04.414 [2024-07-11 02:46:54.476370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.414 [2024-07-11 02:46:54.476399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.414 qpair failed and we were unable to recover it. 00:41:04.414 [2024-07-11 02:46:54.476494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.414 [2024-07-11 02:46:54.476529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.414 qpair failed and we were unable to recover it. 00:41:04.414 [2024-07-11 02:46:54.476623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.414 [2024-07-11 02:46:54.476650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.414 qpair failed and we were unable to recover it. 00:41:04.414 [2024-07-11 02:46:54.476730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.414 [2024-07-11 02:46:54.476756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.414 qpair failed and we were unable to recover it. 00:41:04.414 [2024-07-11 02:46:54.476854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.414 [2024-07-11 02:46:54.476880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.414 qpair failed and we were unable to recover it. 
00:41:04.414 [2024-07-11 02:46:54.476968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.414 [2024-07-11 02:46:54.476995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.414 qpair failed and we were unable to recover it. 00:41:04.414 [2024-07-11 02:46:54.477079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.414 [2024-07-11 02:46:54.477105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.477197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.477224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.477318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.477346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.477444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.477472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 
00:41:04.415 [2024-07-11 02:46:54.477576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.477603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.477698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.477730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.477843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.477872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.477968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.477996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.478087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.478115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 
00:41:04.415 [2024-07-11 02:46:54.478216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.478243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.478334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.478362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.478459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.478486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.478603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.478666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.478757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.478784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 
00:41:04.415 [2024-07-11 02:46:54.478872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.478899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.479037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.479089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.479182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.479209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.479306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.479333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.479424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.479450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 
00:41:04.415 [2024-07-11 02:46:54.479561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.479591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.479685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.479713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.479803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.479829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.479919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.479947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.480032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.480060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 
00:41:04.415 [2024-07-11 02:46:54.480147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.480174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.480263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.480291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.480383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.480409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.480507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.480538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.480634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.480660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 
00:41:04.415 [2024-07-11 02:46:54.480753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.480780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.480874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.480901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.481005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.481031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.481114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.481144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.481264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.481328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 
00:41:04.415 [2024-07-11 02:46:54.481437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.481501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.481615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.481644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.481769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.481824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.481908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.481935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.482019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.482046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 
00:41:04.415 [2024-07-11 02:46:54.482131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.482158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.482242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.415 [2024-07-11 02:46:54.482269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.415 qpair failed and we were unable to recover it. 00:41:04.415 [2024-07-11 02:46:54.482372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.416 [2024-07-11 02:46:54.482399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.416 qpair failed and we were unable to recover it. 00:41:04.416 [2024-07-11 02:46:54.482534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.416 [2024-07-11 02:46:54.482576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.416 qpair failed and we were unable to recover it. 00:41:04.416 [2024-07-11 02:46:54.482705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.416 [2024-07-11 02:46:54.482758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.416 qpair failed and we were unable to recover it. 
00:41:04.416 [2024-07-11 02:46:54.482854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.416 [2024-07-11 02:46:54.482882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.416 qpair failed and we were unable to recover it. 00:41:04.416 [2024-07-11 02:46:54.482981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.416 [2024-07-11 02:46:54.483008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.416 qpair failed and we were unable to recover it. 00:41:04.416 [2024-07-11 02:46:54.483098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.416 [2024-07-11 02:46:54.483125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.416 qpair failed and we were unable to recover it. 00:41:04.416 [2024-07-11 02:46:54.483208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.416 [2024-07-11 02:46:54.483235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.416 qpair failed and we were unable to recover it. 00:41:04.416 [2024-07-11 02:46:54.483346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.416 [2024-07-11 02:46:54.483402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.416 qpair failed and we were unable to recover it. 
00:41:04.416 [2024-07-11 02:46:54.483485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.416 [2024-07-11 02:46:54.483517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.416 qpair failed and we were unable to recover it. 00:41:04.416 [2024-07-11 02:46:54.483604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.416 [2024-07-11 02:46:54.483631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.416 qpair failed and we were unable to recover it. 00:41:04.416 [2024-07-11 02:46:54.483718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.416 [2024-07-11 02:46:54.483745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.416 qpair failed and we were unable to recover it. 00:41:04.416 [2024-07-11 02:46:54.483831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.416 [2024-07-11 02:46:54.483857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.416 qpair failed and we were unable to recover it. 00:41:04.416 [2024-07-11 02:46:54.483945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.416 [2024-07-11 02:46:54.483972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.416 qpair failed and we were unable to recover it. 
00:41:04.416 [2024-07-11 02:46:54.484057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.416 [2024-07-11 02:46:54.484085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.416 qpair failed and we were unable to recover it. 00:41:04.416 [2024-07-11 02:46:54.484172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.416 [2024-07-11 02:46:54.484201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.416 qpair failed and we were unable to recover it. 00:41:04.416 [2024-07-11 02:46:54.484298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.416 [2024-07-11 02:46:54.484326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.416 qpair failed and we were unable to recover it. 00:41:04.416 [2024-07-11 02:46:54.484416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.416 [2024-07-11 02:46:54.484444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.416 qpair failed and we were unable to recover it. 00:41:04.416 [2024-07-11 02:46:54.484528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.416 [2024-07-11 02:46:54.484556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.416 qpair failed and we were unable to recover it. 
00:41:04.416 [2024-07-11 02:46:54.484648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.416 [2024-07-11 02:46:54.484676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.416 qpair failed and we were unable to recover it. 00:41:04.416 [2024-07-11 02:46:54.484767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.416 [2024-07-11 02:46:54.484794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.416 qpair failed and we were unable to recover it. 00:41:04.416 [2024-07-11 02:46:54.484888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.416 [2024-07-11 02:46:54.484917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.416 qpair failed and we were unable to recover it. 00:41:04.416 [2024-07-11 02:46:54.485015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.416 [2024-07-11 02:46:54.485043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.416 qpair failed and we were unable to recover it. 00:41:04.416 [2024-07-11 02:46:54.485143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.416 [2024-07-11 02:46:54.485171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.416 qpair failed and we were unable to recover it. 
00:41:04.416 [2024-07-11 02:46:54.485255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.416 [2024-07-11 02:46:54.485281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.416 qpair failed and we were unable to recover it.
00:41:04.416 [2024-07-11 02:46:54.485411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.416 [2024-07-11 02:46:54.485464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.416 qpair failed and we were unable to recover it.
00:41:04.416 [2024-07-11 02:46:54.485577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.416 [2024-07-11 02:46:54.485635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.416 qpair failed and we were unable to recover it.
00:41:04.416 [2024-07-11 02:46:54.485750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.416 [2024-07-11 02:46:54.485808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.416 qpair failed and we were unable to recover it.
00:41:04.416 [2024-07-11 02:46:54.485924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.416 [2024-07-11 02:46:54.485982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.416 qpair failed and we were unable to recover it.
00:41:04.416 [2024-07-11 02:46:54.486091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.416 [2024-07-11 02:46:54.486117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.416 qpair failed and we were unable to recover it.
00:41:04.416 [2024-07-11 02:46:54.486234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.416 [2024-07-11 02:46:54.486302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.416 qpair failed and we were unable to recover it.
00:41:04.416 [2024-07-11 02:46:54.486436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.416 [2024-07-11 02:46:54.486494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.416 qpair failed and we were unable to recover it.
00:41:04.416 [2024-07-11 02:46:54.486601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.416 [2024-07-11 02:46:54.486633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.416 qpair failed and we were unable to recover it.
00:41:04.416 [2024-07-11 02:46:54.486719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.416 [2024-07-11 02:46:54.486747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.416 qpair failed and we were unable to recover it.
00:41:04.416 [2024-07-11 02:46:54.486830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.416 [2024-07-11 02:46:54.486857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.416 qpair failed and we were unable to recover it.
00:41:04.416 [2024-07-11 02:46:54.486943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.416 [2024-07-11 02:46:54.486973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.416 qpair failed and we were unable to recover it.
00:41:04.416 [2024-07-11 02:46:54.487061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.416 [2024-07-11 02:46:54.487089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.416 qpair failed and we were unable to recover it.
00:41:04.416 [2024-07-11 02:46:54.487174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.416 [2024-07-11 02:46:54.487200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.416 qpair failed and we were unable to recover it.
00:41:04.416 [2024-07-11 02:46:54.487315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.416 [2024-07-11 02:46:54.487369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.416 qpair failed and we were unable to recover it.
00:41:04.416 [2024-07-11 02:46:54.487497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.487557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.487657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.487684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.487772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.487798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.487891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.487917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.488004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.488030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.488116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.488142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.488224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.488250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.488344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.488371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.488454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.488482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.488584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.488611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.488706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.488732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.488815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.488841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.488942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.488968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.489062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.489090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.489186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.489214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.489303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.489332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.489428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.489455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.489544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.489572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.489655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.489682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.489799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.489861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.489948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.489984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.490076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.490104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.490191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.490218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.490310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.490339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.490449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.490477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.490584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.490611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.490702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.490729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.490819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.490845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.490943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.490969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.491057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.491084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.491175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.491203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.491315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.491343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.491434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.491463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.491560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.417 [2024-07-11 02:46:54.491589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.417 qpair failed and we were unable to recover it.
00:41:04.417 [2024-07-11 02:46:54.491695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.491723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.491811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.491839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.491926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.491953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.492051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.492078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.492187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.492214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.492318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.492347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.492435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.492462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.492545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.492571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.492657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.492683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.492772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.492798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.492885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.492913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.493007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.493034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.493134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.493161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.493249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.493281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.493391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.493447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.493536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.493564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.493659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.493686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.493765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.493792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.493876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.493903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.493981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.494007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.494098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.494126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.494224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.494251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.494334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.494362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.494446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.494475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.494615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.494669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.494766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.494794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.494881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.494907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.494998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.495025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.495107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.495134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.495224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.495251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.495349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.495376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.495470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.495498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.495593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.495619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.495706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.495732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.495816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.495842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.495925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.495951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.496043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.496070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.496160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.496186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.496279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.496306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.496398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.496426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.496536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.418 [2024-07-11 02:46:54.496565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.418 qpair failed and we were unable to recover it.
00:41:04.418 [2024-07-11 02:46:54.496660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.419 [2024-07-11 02:46:54.496689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.419 qpair failed and we were unable to recover it.
00:41:04.419 [2024-07-11 02:46:54.496783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.419 [2024-07-11 02:46:54.496811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.419 qpair failed and we were unable to recover it.
00:41:04.419 [2024-07-11 02:46:54.496904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.419 [2024-07-11 02:46:54.496932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.419 qpair failed and we were unable to recover it.
00:41:04.419 [2024-07-11 02:46:54.497086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.419 [2024-07-11 02:46:54.497155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.419 qpair failed and we were unable to recover it.
00:41:04.419 [2024-07-11 02:46:54.497248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.419 [2024-07-11 02:46:54.497276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.419 qpair failed and we were unable to recover it.
00:41:04.419 [2024-07-11 02:46:54.497408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.419 [2024-07-11 02:46:54.497466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.419 qpair failed and we were unable to recover it.
00:41:04.419 [2024-07-11 02:46:54.497566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.419 [2024-07-11 02:46:54.497594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.419 qpair failed and we were unable to recover it.
00:41:04.419 [2024-07-11 02:46:54.497687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.419 [2024-07-11 02:46:54.497714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.419 qpair failed and we were unable to recover it.
00:41:04.419 [2024-07-11 02:46:54.497807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.419 [2024-07-11 02:46:54.497833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.419 qpair failed and we were unable to recover it.
00:41:04.419 [2024-07-11 02:46:54.497953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.419 [2024-07-11 02:46:54.498013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.419 qpair failed and we were unable to recover it.
00:41:04.419 [2024-07-11 02:46:54.498100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.419 [2024-07-11 02:46:54.498127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.419 qpair failed and we were unable to recover it.
00:41:04.419 [2024-07-11 02:46:54.498263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.419 [2024-07-11 02:46:54.498323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.419 qpair failed and we were unable to recover it.
00:41:04.419 [2024-07-11 02:46:54.498416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.419 [2024-07-11 02:46:54.498449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.419 qpair failed and we were unable to recover it.
00:41:04.419 [2024-07-11 02:46:54.498547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.419 [2024-07-11 02:46:54.498575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.419 qpair failed and we were unable to recover it.
00:41:04.419 [2024-07-11 02:46:54.498684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.419 [2024-07-11 02:46:54.498740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.419 qpair failed and we were unable to recover it.
00:41:04.419 [2024-07-11 02:46:54.498854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.419 [2024-07-11 02:46:54.498915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.419 qpair failed and we were unable to recover it.
00:41:04.419 [2024-07-11 02:46:54.499006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.419 [2024-07-11 02:46:54.499033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.419 qpair failed and we were unable to recover it.
00:41:04.419 [2024-07-11 02:46:54.499170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.419 [2024-07-11 02:46:54.499227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.419 qpair failed and we were unable to recover it.
00:41:04.419 [2024-07-11 02:46:54.499365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.419 [2024-07-11 02:46:54.499409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.419 qpair failed and we were unable to recover it.
00:41:04.419 [2024-07-11 02:46:54.499518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.419 [2024-07-11 02:46:54.499546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.419 qpair failed and we were unable to recover it.
00:41:04.419 [2024-07-11 02:46:54.499646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.419 [2024-07-11 02:46:54.499673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.419 qpair failed and we were unable to recover it.
00:41:04.419 [2024-07-11 02:46:54.499772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.419 [2024-07-11 02:46:54.499800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.419 qpair failed and we were unable to recover it.
00:41:04.419 [2024-07-11 02:46:54.499898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.419 [2024-07-11 02:46:54.499926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.419 qpair failed and we were unable to recover it.
00:41:04.419 [2024-07-11 02:46:54.500015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.419 [2024-07-11 02:46:54.500042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.419 qpair failed and we were unable to recover it.
00:41:04.419 [2024-07-11 02:46:54.500138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.419 [2024-07-11 02:46:54.500167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.419 qpair failed and we were unable to recover it.
00:41:04.419 [2024-07-11 02:46:54.500260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.419 [2024-07-11 02:46:54.500288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.419 qpair failed and we were unable to recover it. 00:41:04.419 [2024-07-11 02:46:54.500385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.419 [2024-07-11 02:46:54.500412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.419 qpair failed and we were unable to recover it. 00:41:04.419 [2024-07-11 02:46:54.500505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.419 [2024-07-11 02:46:54.500538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.419 qpair failed and we were unable to recover it. 00:41:04.419 [2024-07-11 02:46:54.500641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.419 [2024-07-11 02:46:54.500667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.419 qpair failed and we were unable to recover it. 00:41:04.419 [2024-07-11 02:46:54.500764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.419 [2024-07-11 02:46:54.500794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.419 qpair failed and we were unable to recover it. 
00:41:04.419 [2024-07-11 02:46:54.500885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.419 [2024-07-11 02:46:54.500913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.419 qpair failed and we were unable to recover it. 00:41:04.419 [2024-07-11 02:46:54.501007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.419 [2024-07-11 02:46:54.501034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.419 qpair failed and we were unable to recover it. 00:41:04.419 [2024-07-11 02:46:54.501118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.419 [2024-07-11 02:46:54.501145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.419 qpair failed and we were unable to recover it. 00:41:04.419 [2024-07-11 02:46:54.501226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.419 [2024-07-11 02:46:54.501252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.419 qpair failed and we were unable to recover it. 00:41:04.419 [2024-07-11 02:46:54.501371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.419 [2024-07-11 02:46:54.501398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.419 qpair failed and we were unable to recover it. 
00:41:04.419 [2024-07-11 02:46:54.501489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.419 [2024-07-11 02:46:54.501530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.419 qpair failed and we were unable to recover it. 00:41:04.419 [2024-07-11 02:46:54.501622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.419 [2024-07-11 02:46:54.501650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.419 qpair failed and we were unable to recover it. 00:41:04.419 [2024-07-11 02:46:54.501739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.419 [2024-07-11 02:46:54.501766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.419 qpair failed and we were unable to recover it. 00:41:04.419 [2024-07-11 02:46:54.501860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.419 [2024-07-11 02:46:54.501888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.419 qpair failed and we were unable to recover it. 00:41:04.419 [2024-07-11 02:46:54.501974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.419 [2024-07-11 02:46:54.502008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.419 qpair failed and we were unable to recover it. 
00:41:04.419 [2024-07-11 02:46:54.502111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.419 [2024-07-11 02:46:54.502138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.419 qpair failed and we were unable to recover it. 00:41:04.419 [2024-07-11 02:46:54.502226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.502253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.502332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.502359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.502453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.502481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.502580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.502607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 
00:41:04.420 [2024-07-11 02:46:54.502703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.502729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.502853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.502912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.502996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.503022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.503103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.503129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.503214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.503241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 
00:41:04.420 [2024-07-11 02:46:54.503332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.503358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.503442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.503469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.503565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.503592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.503680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.503706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.503795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.503822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 
00:41:04.420 [2024-07-11 02:46:54.503913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.503939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.504042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.504068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.504154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.504183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.504286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.504314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.504401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.504430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 
00:41:04.420 [2024-07-11 02:46:54.504529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.504556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.504657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.504684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.504773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.504799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.504885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.504911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.505005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.505033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 
00:41:04.420 [2024-07-11 02:46:54.505119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.505146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.505226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.505258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.505353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.505381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.505472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.505501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.505595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.505624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 
00:41:04.420 [2024-07-11 02:46:54.505710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.505737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.505827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.505854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.505981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.506035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.506123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.506149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.506234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.506261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 
00:41:04.420 [2024-07-11 02:46:54.506343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.506370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.506461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.506489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.506592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.506621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.506706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.506733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.506816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.506843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 
00:41:04.420 [2024-07-11 02:46:54.506947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.506973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.420 qpair failed and we were unable to recover it. 00:41:04.420 [2024-07-11 02:46:54.507058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.420 [2024-07-11 02:46:54.507085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.421 qpair failed and we were unable to recover it. 00:41:04.421 [2024-07-11 02:46:54.507176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.421 [2024-07-11 02:46:54.507202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.421 qpair failed and we were unable to recover it. 00:41:04.421 [2024-07-11 02:46:54.507290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.421 [2024-07-11 02:46:54.507321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.421 qpair failed and we were unable to recover it. 00:41:04.421 [2024-07-11 02:46:54.507426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.421 [2024-07-11 02:46:54.507453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.421 qpair failed and we were unable to recover it. 
00:41:04.421 [2024-07-11 02:46:54.507559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.421 [2024-07-11 02:46:54.507586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.421 qpair failed and we were unable to recover it. 00:41:04.421 [2024-07-11 02:46:54.507677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.421 [2024-07-11 02:46:54.507703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.421 qpair failed and we were unable to recover it. 00:41:04.421 [2024-07-11 02:46:54.507789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.421 [2024-07-11 02:46:54.507816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.421 qpair failed and we were unable to recover it. 00:41:04.421 [2024-07-11 02:46:54.507902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.421 [2024-07-11 02:46:54.507929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.421 qpair failed and we were unable to recover it. 00:41:04.421 [2024-07-11 02:46:54.508031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.421 [2024-07-11 02:46:54.508059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.421 qpair failed and we were unable to recover it. 
00:41:04.421 [2024-07-11 02:46:54.508143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.421 [2024-07-11 02:46:54.508169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.421 qpair failed and we were unable to recover it. 00:41:04.421 [2024-07-11 02:46:54.508264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.421 [2024-07-11 02:46:54.508290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.421 qpair failed and we were unable to recover it. 00:41:04.421 [2024-07-11 02:46:54.508396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.421 [2024-07-11 02:46:54.508424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.421 qpair failed and we were unable to recover it. 00:41:04.421 [2024-07-11 02:46:54.508513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.421 [2024-07-11 02:46:54.508544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.421 qpair failed and we were unable to recover it. 00:41:04.421 [2024-07-11 02:46:54.508628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.421 [2024-07-11 02:46:54.508655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.421 qpair failed and we were unable to recover it. 
00:41:04.421 [2024-07-11 02:46:54.508736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.421 [2024-07-11 02:46:54.508765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.421 qpair failed and we were unable to recover it. 00:41:04.421 [2024-07-11 02:46:54.508853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.421 [2024-07-11 02:46:54.508882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.421 qpair failed and we were unable to recover it. 00:41:04.421 [2024-07-11 02:46:54.508980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.421 [2024-07-11 02:46:54.509008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.421 qpair failed and we were unable to recover it. 00:41:04.421 [2024-07-11 02:46:54.509098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.421 [2024-07-11 02:46:54.509125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.421 qpair failed and we were unable to recover it. 00:41:04.421 [2024-07-11 02:46:54.509213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.421 [2024-07-11 02:46:54.509240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.421 qpair failed and we were unable to recover it. 
00:41:04.421 [2024-07-11 02:46:54.509345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.421 [2024-07-11 02:46:54.509372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.421 qpair failed and we were unable to recover it. 
[... the connect()/qpair-failure message pair above repeats continuously from 02:46:54.509 through 02:46:54.523 (same addr=10.0.0.2, port=4420, errno = 111) for tqpair values 0x7f332c000b90, 0x7f333c000b90, 0x7f3334000b90, and 0x2266180; repeated occurrences elided ...]
00:41:04.424 [2024-07-11 02:46:54.523320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.424 [2024-07-11 02:46:54.523347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.424 qpair failed and we were unable to recover it. 00:41:04.424 [2024-07-11 02:46:54.523434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.424 [2024-07-11 02:46:54.523460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.424 qpair failed and we were unable to recover it. 00:41:04.424 [2024-07-11 02:46:54.523574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.424 [2024-07-11 02:46:54.523601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.424 qpair failed and we were unable to recover it. 00:41:04.424 [2024-07-11 02:46:54.523684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.424 [2024-07-11 02:46:54.523710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.424 qpair failed and we were unable to recover it. 00:41:04.424 [2024-07-11 02:46:54.523804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.424 [2024-07-11 02:46:54.523831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.424 qpair failed and we were unable to recover it. 
00:41:04.424 [2024-07-11 02:46:54.523920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.424 [2024-07-11 02:46:54.523948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.424 qpair failed and we were unable to recover it. 00:41:04.424 [2024-07-11 02:46:54.524030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.424 [2024-07-11 02:46:54.524057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.424 qpair failed and we were unable to recover it. 00:41:04.424 [2024-07-11 02:46:54.524142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.424 [2024-07-11 02:46:54.524170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.424 qpair failed and we were unable to recover it. 00:41:04.424 [2024-07-11 02:46:54.524255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.424 [2024-07-11 02:46:54.524281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.424 qpair failed and we were unable to recover it. 00:41:04.424 [2024-07-11 02:46:54.524370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.424 [2024-07-11 02:46:54.524397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.424 qpair failed and we were unable to recover it. 
00:41:04.424 [2024-07-11 02:46:54.524502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.424 [2024-07-11 02:46:54.524535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.424 qpair failed and we were unable to recover it. 00:41:04.424 [2024-07-11 02:46:54.524625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.424 [2024-07-11 02:46:54.524652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.424 qpair failed and we were unable to recover it. 00:41:04.424 [2024-07-11 02:46:54.524738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.424 [2024-07-11 02:46:54.524769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.424 qpair failed and we were unable to recover it. 00:41:04.424 [2024-07-11 02:46:54.524851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.424 [2024-07-11 02:46:54.524878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.424 qpair failed and we were unable to recover it. 00:41:04.424 [2024-07-11 02:46:54.524965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.424 [2024-07-11 02:46:54.524993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.424 qpair failed and we were unable to recover it. 
00:41:04.424 [2024-07-11 02:46:54.525102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.424 [2024-07-11 02:46:54.525167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.424 qpair failed and we were unable to recover it. 00:41:04.424 [2024-07-11 02:46:54.525265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.424 [2024-07-11 02:46:54.525293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.424 qpair failed and we were unable to recover it. 00:41:04.424 [2024-07-11 02:46:54.525396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.424 [2024-07-11 02:46:54.525424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.424 qpair failed and we were unable to recover it. 00:41:04.424 [2024-07-11 02:46:54.525523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.424 [2024-07-11 02:46:54.525550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.424 qpair failed and we were unable to recover it. 00:41:04.424 [2024-07-11 02:46:54.525639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.424 [2024-07-11 02:46:54.525666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.424 qpair failed and we were unable to recover it. 
00:41:04.424 [2024-07-11 02:46:54.525748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.424 [2024-07-11 02:46:54.525774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.424 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.525854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.525880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.525994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.526060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.526146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.526173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.526272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.526299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 
00:41:04.425 [2024-07-11 02:46:54.526387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.526414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.526515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.526543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.526634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.526662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.526774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.526801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.526891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.526918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 
00:41:04.425 [2024-07-11 02:46:54.527001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.527027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.527115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.527142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.527227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.527253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.527338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.527363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.527455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.527482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 
00:41:04.425 [2024-07-11 02:46:54.527583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.527611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.527696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.527722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.527827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.527855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.527962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.527990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.528075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.528107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 
00:41:04.425 [2024-07-11 02:46:54.528193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.528220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.528317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.528344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.528431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.528458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.528554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.528581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.528677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.528705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 
00:41:04.425 [2024-07-11 02:46:54.528791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.528817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.528905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.528931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.529014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.529041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.529137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.529166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.529270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.529297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 
00:41:04.425 [2024-07-11 02:46:54.529396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.529423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.529508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.529540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.529624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.529652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.529755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.529784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.529874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.529901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 
00:41:04.425 [2024-07-11 02:46:54.529984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.530011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.530107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.530134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.530216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.530243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.530338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.425 [2024-07-11 02:46:54.530365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.425 qpair failed and we were unable to recover it. 00:41:04.425 [2024-07-11 02:46:54.530460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.530488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 
00:41:04.426 [2024-07-11 02:46:54.530601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.530630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.530724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.530751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.530838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.530865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.530948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.530975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.531076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.531103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 
00:41:04.426 [2024-07-11 02:46:54.531188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.531214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.531298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.531329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.531430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.531457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.531556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.531583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.531671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.531698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 
00:41:04.426 [2024-07-11 02:46:54.531793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.531820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.531905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.531934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.532026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.532053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.532146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.532173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.532274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.532301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 
00:41:04.426 [2024-07-11 02:46:54.532403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.532431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.532527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.532555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.532643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.532670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.532755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.532782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.532867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.532894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 
00:41:04.426 [2024-07-11 02:46:54.532987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.533014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.533122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.533181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.533293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.533320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.533399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.533425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.533515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.533544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 
00:41:04.426 [2024-07-11 02:46:54.533649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.533676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.533771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.533800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.534035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.534061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.534187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.534249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.534339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.534365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 
00:41:04.426 [2024-07-11 02:46:54.534577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.534605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.534697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.534724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.534814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.534841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.534933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.426 [2024-07-11 02:46:54.534961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.426 qpair failed and we were unable to recover it. 00:41:04.426 [2024-07-11 02:46:54.535064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.535090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 
00:41:04.427 [2024-07-11 02:46:54.535181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.535210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.535302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.535328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.535412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.535440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.535539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.535567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.535704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.535762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 
00:41:04.427 [2024-07-11 02:46:54.535853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.535881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.535966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.535994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.536085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.536114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.536229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.536257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.536344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.536371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 
00:41:04.427 [2024-07-11 02:46:54.536463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.536489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.536634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.536661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.536755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.536782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.536867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.536893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.537018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.537074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 
00:41:04.427 [2024-07-11 02:46:54.537168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.537194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.537281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.537308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.537392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.537419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.537508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.537541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.537640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.537668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 
00:41:04.427 [2024-07-11 02:46:54.537754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.537781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.537868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.537894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.537980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.538006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.538095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.538121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.538221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.538248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 
00:41:04.427 [2024-07-11 02:46:54.538340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.538366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.538485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.538527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.538661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.538711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.538832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.538884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.538975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.539002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 
00:41:04.427 [2024-07-11 02:46:54.539102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.539129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.539212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.539239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.539323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.539350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.539440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.539466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.539556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.539583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 
00:41:04.427 [2024-07-11 02:46:54.539704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.539762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.539853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.539882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.539980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.540007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.427 [2024-07-11 02:46:54.540100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.427 [2024-07-11 02:46:54.540132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.427 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.540230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.540257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 
00:41:04.428 [2024-07-11 02:46:54.540360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.540389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.540483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.540515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.540604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.540631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.540768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.540795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.540890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.540917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 
00:41:04.428 [2024-07-11 02:46:54.541005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.541033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.541123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.541154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.541288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.541342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.541430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.541458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.541563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.541591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 
00:41:04.428 [2024-07-11 02:46:54.541721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.541777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.541872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.541898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.542046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.542091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.542198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.542225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.542321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.542347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 
00:41:04.428 [2024-07-11 02:46:54.542436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.542462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.542560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.542588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.542685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.542714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.542801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.542828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.542911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.542939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 
00:41:04.428 [2024-07-11 02:46:54.543027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.543053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.543141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.543169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.543264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.543291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.543379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.543406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.543488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.543522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 
00:41:04.428 [2024-07-11 02:46:54.543628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.543698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.543851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.543879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.544052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.544091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.544182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.544210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.544300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.544328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 
00:41:04.428 [2024-07-11 02:46:54.544411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.544437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.544529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.544557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.544643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.544669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.544771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.544798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.544900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.544927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 
00:41:04.428 [2024-07-11 02:46:54.545021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.545047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.545137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.545164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.545263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.428 [2024-07-11 02:46:54.545289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.428 qpair failed and we were unable to recover it. 00:41:04.428 [2024-07-11 02:46:54.545372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.429 [2024-07-11 02:46:54.545399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.429 qpair failed and we were unable to recover it. 00:41:04.429 [2024-07-11 02:46:54.545486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.429 [2024-07-11 02:46:54.545519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.429 qpair failed and we were unable to recover it. 
00:41:04.429 [2024-07-11 02:46:54.545608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.429 [2024-07-11 02:46:54.545635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.429 qpair failed and we were unable to recover it. 00:41:04.429 [2024-07-11 02:46:54.545726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.429 [2024-07-11 02:46:54.545754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.429 qpair failed and we were unable to recover it. 00:41:04.429 [2024-07-11 02:46:54.545848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.429 [2024-07-11 02:46:54.545874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.429 qpair failed and we were unable to recover it. 00:41:04.429 [2024-07-11 02:46:54.545973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.429 [2024-07-11 02:46:54.546000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.429 qpair failed and we were unable to recover it. 00:41:04.429 [2024-07-11 02:46:54.546085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.429 [2024-07-11 02:46:54.546111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.429 qpair failed and we were unable to recover it. 
00:41:04.429 [2024-07-11 02:46:54.546213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.546242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.546344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.546374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.546473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.546502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.546600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.546627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.546717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.546744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.546836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.546864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.546979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.547006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.547110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.547142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.547231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.547258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.547359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.547385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.547473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.547500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.547595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.547623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.547719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.547745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.547871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.547932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.548020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.548047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.548138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.548164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.548253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.548280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.548373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.548399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.548488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.548520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.548606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.548633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.548726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.548755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.548859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.548886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.548980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.549009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.549116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.549146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.549262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.549317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.549428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.549490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.549583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.549610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.549693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.549720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.549806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.549833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.549933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.549961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.550056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.550082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.550169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.550195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.550278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.550304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.550390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.429 [2024-07-11 02:46:54.550417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.429 qpair failed and we were unable to recover it.
00:41:04.429 [2024-07-11 02:46:54.550527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.550554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.550641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.550667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.550757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.550784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.550878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.550904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.551007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.551036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.551121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.551148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.551242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.551269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.551381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.551408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.551495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.551538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.551623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.551649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.551745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.551771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.551862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.551890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.551976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.552002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.552084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.552110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.552211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.552239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.552341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.552370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.552460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.552486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.552576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.552603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.552692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.552720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.552830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.552857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.552937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.552964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.553045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.553071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.553165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.553193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.553291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.553318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.553405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.553431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.553532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.553560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.553653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.553680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.553768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.553799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.553882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.553908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.553999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.554027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.554120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.554147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.554237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.554264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.554347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.554373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.554472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.554498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.430 [2024-07-11 02:46:54.554640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.430 [2024-07-11 02:46:54.554694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.430 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.554782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.431 [2024-07-11 02:46:54.554809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.431 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.554891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.431 [2024-07-11 02:46:54.554917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.431 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.555023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.431 [2024-07-11 02:46:54.555051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.431 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.555144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.431 [2024-07-11 02:46:54.555171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.431 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.555266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.431 [2024-07-11 02:46:54.555293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.431 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.555383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.431 [2024-07-11 02:46:54.555409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.431 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.555499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.431 [2024-07-11 02:46:54.555536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.431 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.555620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.431 [2024-07-11 02:46:54.555646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.431 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.555746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.431 [2024-07-11 02:46:54.555773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.431 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.555858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.431 [2024-07-11 02:46:54.555884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.431 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.555975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.431 [2024-07-11 02:46:54.556005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.431 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.556142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.431 [2024-07-11 02:46:54.556201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.431 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.556301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.431 [2024-07-11 02:46:54.556327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.431 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.556422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.431 [2024-07-11 02:46:54.556449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.431 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.556536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.431 [2024-07-11 02:46:54.556563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.431 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.556649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.431 [2024-07-11 02:46:54.556676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.431 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.556767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.431 [2024-07-11 02:46:54.556794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.431 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.556880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.431 [2024-07-11 02:46:54.556907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.431 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.557009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.431 [2024-07-11 02:46:54.557037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.431 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.557133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.431 [2024-07-11 02:46:54.557162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.431 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.557254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.431 [2024-07-11 02:46:54.557282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.431 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.557384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.431 [2024-07-11 02:46:54.557410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.431 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.557498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.431 [2024-07-11 02:46:54.557535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.431 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.557628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.431 [2024-07-11 02:46:54.557655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.431 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.557757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.431 [2024-07-11 02:46:54.557784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.431 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.557870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.431 [2024-07-11 02:46:54.557896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.431 qpair failed and we were unable to recover it.
00:41:04.431 [2024-07-11 02:46:54.557976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.431 [2024-07-11 02:46:54.558003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.431 qpair failed and we were unable to recover it. 00:41:04.431 [2024-07-11 02:46:54.558090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.431 [2024-07-11 02:46:54.558115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.431 qpair failed and we were unable to recover it. 00:41:04.431 [2024-07-11 02:46:54.558202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.431 [2024-07-11 02:46:54.558228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.431 qpair failed and we were unable to recover it. 00:41:04.431 [2024-07-11 02:46:54.558317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.431 [2024-07-11 02:46:54.558344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.431 qpair failed and we were unable to recover it. 00:41:04.431 [2024-07-11 02:46:54.558438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.431 [2024-07-11 02:46:54.558466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.431 qpair failed and we were unable to recover it. 
00:41:04.431 [2024-07-11 02:46:54.558571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.431 [2024-07-11 02:46:54.558600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.431 qpair failed and we were unable to recover it. 00:41:04.431 [2024-07-11 02:46:54.558685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.431 [2024-07-11 02:46:54.558712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.431 qpair failed and we were unable to recover it. 00:41:04.431 [2024-07-11 02:46:54.558811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.431 [2024-07-11 02:46:54.558839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.431 qpair failed and we were unable to recover it. 00:41:04.431 [2024-07-11 02:46:54.558932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.431 [2024-07-11 02:46:54.558959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.431 qpair failed and we were unable to recover it. 00:41:04.431 [2024-07-11 02:46:54.559048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.431 [2024-07-11 02:46:54.559075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.431 qpair failed and we were unable to recover it. 
00:41:04.431 [2024-07-11 02:46:54.559164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.431 [2024-07-11 02:46:54.559192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.431 qpair failed and we were unable to recover it. 00:41:04.431 [2024-07-11 02:46:54.559275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.431 [2024-07-11 02:46:54.559302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.431 qpair failed and we were unable to recover it. 00:41:04.431 [2024-07-11 02:46:54.559400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.431 [2024-07-11 02:46:54.559427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.431 qpair failed and we were unable to recover it. 00:41:04.431 [2024-07-11 02:46:54.559516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.559543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.559635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.559662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 
00:41:04.432 [2024-07-11 02:46:54.559750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.559776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.559859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.559885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.559974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.560001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.560084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.560112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.560202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.560229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 
00:41:04.432 [2024-07-11 02:46:54.560323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.560351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.560446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.560473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.560603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.560660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.560769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.560832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.560931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.560957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 
00:41:04.432 [2024-07-11 02:46:54.561041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.561068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.561165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.561191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.561271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.561297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.561402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.561431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.561545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.561573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 
00:41:04.432 [2024-07-11 02:46:54.561656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.561682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.561782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.561809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.561896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.561924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.562033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.562065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.562161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.562188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 
00:41:04.432 [2024-07-11 02:46:54.562281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.562316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.562415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.562444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.562541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.562569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.562680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.562742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.562863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.562917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 
00:41:04.432 [2024-07-11 02:46:54.563035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.563098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.563239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.563292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.563385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.563412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.563520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.563547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.563636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.563663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 
00:41:04.432 [2024-07-11 02:46:54.563769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.563797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.563920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.563984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.564078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.564105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.564239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.564296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.564396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.564425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 
00:41:04.432 [2024-07-11 02:46:54.564529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.564557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.564648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.564676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.564805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.564864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.564965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.432 [2024-07-11 02:46:54.564992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.432 qpair failed and we were unable to recover it. 00:41:04.432 [2024-07-11 02:46:54.565077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.565104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 
00:41:04.433 [2024-07-11 02:46:54.565225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.565277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.565362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.565389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.565472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.565499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.565633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.565686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.565771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.565798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 
00:41:04.433 [2024-07-11 02:46:54.565883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.565916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.566031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.566058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.566183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.566235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.566328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.566356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.566442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.566469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 
00:41:04.433 [2024-07-11 02:46:54.566590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.566655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.566747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.566775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.566867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.566894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.566997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.567024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.567129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.567156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 
00:41:04.433 [2024-07-11 02:46:54.567241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.567267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.567355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.567382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.567471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.567497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.567592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.567619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.567712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.567740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 
00:41:04.433 [2024-07-11 02:46:54.567825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.567851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.567939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.567966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.568064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.568092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.568192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.568218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.568301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.568328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 
00:41:04.433 [2024-07-11 02:46:54.568419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.568449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.568547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.568575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.568693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.568755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.568844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.568870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.568952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.568978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 
00:41:04.433 [2024-07-11 02:46:54.569076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.569102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.569207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.569235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.569327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.569357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.569449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.569476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.569583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.569611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 
00:41:04.433 [2024-07-11 02:46:54.569704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.569731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.569841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.569902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.570020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.570080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.570171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.570198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 00:41:04.433 [2024-07-11 02:46:54.570291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.433 [2024-07-11 02:46:54.570318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.433 qpair failed and we were unable to recover it. 
00:41:04.434 [2024-07-11 02:46:54.570416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.570443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.570525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.570552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.570654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.570681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.570767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.570794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.570898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.570925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 
00:41:04.434 [2024-07-11 02:46:54.571009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.571040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.571127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.571154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.571234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.571261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.571349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.571376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.571464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.571490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 
00:41:04.434 [2024-07-11 02:46:54.571599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.571627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.571718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.571744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.571886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.571930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.572046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.572106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.572204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.572231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 
00:41:04.434 [2024-07-11 02:46:54.572334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.572362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.572448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.572475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.572609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.572664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.572750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.572777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.572890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.572952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 
00:41:04.434 [2024-07-11 02:46:54.573036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.573062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.573154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.573180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.573262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.573289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.573391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.573417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.573505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.573539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 
00:41:04.434 [2024-07-11 02:46:54.573633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.573660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.573757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.573783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.573870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.573896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.573980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.574007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.574095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.574121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 
00:41:04.434 [2024-07-11 02:46:54.574203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.574230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.574322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.574349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.574435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.574466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.574583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.574613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 00:41:04.434 [2024-07-11 02:46:54.574705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.434 [2024-07-11 02:46:54.574733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.434 qpair failed and we were unable to recover it. 
00:41:04.435 [2024-07-11 02:46:54.574824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.574850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.574942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.574972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.575068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.575095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.575181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.575209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.575302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.575329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 
00:41:04.435 [2024-07-11 02:46:54.575462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.575505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.575598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.575625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.575715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.575743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.575834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.575861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.575944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.575971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 
00:41:04.435 [2024-07-11 02:46:54.576116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.576176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.576277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.576304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.576399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.576426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.576528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.576556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.576648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.576674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 
00:41:04.435 [2024-07-11 02:46:54.576759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.576786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.576899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.576961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.577086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.577139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.577219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.577245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.577344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.577371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 
00:41:04.435 [2024-07-11 02:46:54.577490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.577550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.577657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.577686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.577777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.577805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.577896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.577924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.578046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.578113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 
00:41:04.435 [2024-07-11 02:46:54.578204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.578232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.578320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.578351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.578444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.578473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.578569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.578597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.578683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.578710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 
00:41:04.435 [2024-07-11 02:46:54.578809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.578838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.578954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.579011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.579099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.579126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.579258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.579285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.579374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.579403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 
00:41:04.435 [2024-07-11 02:46:54.579501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.579537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.579630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.579658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.579743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.579770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.579910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.579968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.435 [2024-07-11 02:46:54.580053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.580080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 
00:41:04.435 [2024-07-11 02:46:54.580172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.435 [2024-07-11 02:46:54.580200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.435 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.580302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.580329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.580419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.580449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.580547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.580576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.580663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.580691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 
00:41:04.436 [2024-07-11 02:46:54.580840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.580893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.581007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.581064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.581163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.581191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.581317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.581377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.581462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.581492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 
00:41:04.436 [2024-07-11 02:46:54.581592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.581620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.581706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.581737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.581829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.581857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.581979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.582036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.582119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.582146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 
00:41:04.436 [2024-07-11 02:46:54.582233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.582260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.582373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.582400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.582481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.582508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.582625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.582685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.582801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.582858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 
00:41:04.436 [2024-07-11 02:46:54.582962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.583027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.583109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.583135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.583237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.583266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.583361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.583389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.583470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.583497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 
00:41:04.436 [2024-07-11 02:46:54.583603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.583631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.583729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.583756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.583851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.583878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.583917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2274170 (9): Bad file descriptor 00:41:04.436 [2024-07-11 02:46:54.584025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.584055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.584150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.584179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 
00:41:04.436 [2024-07-11 02:46:54.584278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.584305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.584438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.584465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.584562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.584590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.584678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.584705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.584798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.584825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 
00:41:04.436 [2024-07-11 02:46:54.584934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.584962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.585051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.585077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.585162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.585188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.585273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.585298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.436 [2024-07-11 02:46:54.585383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.585409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 
00:41:04.436 [2024-07-11 02:46:54.585493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.436 [2024-07-11 02:46:54.585527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.436 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.585616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.585642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.585732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.585757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.585859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.585886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.585968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.585994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 
00:41:04.437 [2024-07-11 02:46:54.586071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.586096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.586173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.586199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.586282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.586309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.586396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.586423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.586547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.586574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 
00:41:04.437 [2024-07-11 02:46:54.586657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.586683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.586771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.586801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.586882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.586908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.586989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.587015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.587096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.587122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 
00:41:04.437 [2024-07-11 02:46:54.587202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.587228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.587312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.587338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.587419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.587445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.587527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.587553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.587655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.587681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 
00:41:04.437 [2024-07-11 02:46:54.587782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.587808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.587893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.587919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.588008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.588035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.588128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.588154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.588241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.588268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 
00:41:04.437 [2024-07-11 02:46:54.588358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.588384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.588474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.588500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.588593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.588619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.588701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.588726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.588829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.588854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 
00:41:04.437 [2024-07-11 02:46:54.588955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.588981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.589069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.589094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.589206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.589232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.589321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.589347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.589433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.589459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 
00:41:04.437 [2024-07-11 02:46:54.589547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.589574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.589663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.589689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.589777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.589802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.589886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.589916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.590012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.590044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 
00:41:04.437 [2024-07-11 02:46:54.590139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.590168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.437 [2024-07-11 02:46:54.590259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.437 [2024-07-11 02:46:54.590286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.437 qpair failed and we were unable to recover it. 00:41:04.438 [2024-07-11 02:46:54.590372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.590397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 00:41:04.438 [2024-07-11 02:46:54.590478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.590508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 00:41:04.438 [2024-07-11 02:46:54.590607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.590633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 
00:41:04.438 [2024-07-11 02:46:54.590729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.590756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 00:41:04.438 [2024-07-11 02:46:54.590843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.590868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 00:41:04.438 [2024-07-11 02:46:54.590954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.590980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 00:41:04.438 [2024-07-11 02:46:54.591065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.591090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 00:41:04.438 [2024-07-11 02:46:54.591170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.591195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 
00:41:04.438 [2024-07-11 02:46:54.591286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.591312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 00:41:04.438 [2024-07-11 02:46:54.591396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.591423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 00:41:04.438 [2024-07-11 02:46:54.591515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.591543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 00:41:04.438 [2024-07-11 02:46:54.591634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.591661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 00:41:04.438 [2024-07-11 02:46:54.591750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.591776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 
00:41:04.438 [2024-07-11 02:46:54.591867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.591895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 00:41:04.438 [2024-07-11 02:46:54.591980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.592006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 00:41:04.438 [2024-07-11 02:46:54.592095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.592122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 00:41:04.438 [2024-07-11 02:46:54.592214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.592244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 00:41:04.438 [2024-07-11 02:46:54.592336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.592364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 
00:41:04.438 [2024-07-11 02:46:54.592456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.592484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 00:41:04.438 [2024-07-11 02:46:54.592577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.592604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 00:41:04.438 [2024-07-11 02:46:54.592705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.592733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 00:41:04.438 [2024-07-11 02:46:54.592826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.592853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 00:41:04.438 [2024-07-11 02:46:54.592947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.592975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 
00:41:04.438 [2024-07-11 02:46:54.593073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.593103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 00:41:04.438 [2024-07-11 02:46:54.593192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.593221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 00:41:04.438 [2024-07-11 02:46:54.593316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.593344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 00:41:04.438 [2024-07-11 02:46:54.593447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.593475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 00:41:04.438 [2024-07-11 02:46:54.593576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.438 [2024-07-11 02:46:54.593603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.438 qpair failed and we were unable to recover it. 
00:41:04.438-00:41:04.441 [2024-07-11 02:46:54.593696 - 02:46:54.607815] (identical records repeated: posix.c:1038:posix_sock_create connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x2266180, 0x7f332c000b90, and 0x7f333c000b90 with addr=10.0.0.2, port=4420; each qpair failed and we were unable to recover it)
00:41:04.441 [2024-07-11 02:46:54.607904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.441 [2024-07-11 02:46:54.607933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.441 qpair failed and we were unable to recover it. 00:41:04.441 [2024-07-11 02:46:54.608026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.441 [2024-07-11 02:46:54.608054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.441 qpair failed and we were unable to recover it. 00:41:04.441 [2024-07-11 02:46:54.608139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.441 [2024-07-11 02:46:54.608167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.441 qpair failed and we were unable to recover it. 00:41:04.441 [2024-07-11 02:46:54.608251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.441 [2024-07-11 02:46:54.608278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.441 qpair failed and we were unable to recover it. 00:41:04.441 [2024-07-11 02:46:54.608373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.441 [2024-07-11 02:46:54.608400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.441 qpair failed and we were unable to recover it. 
00:41:04.441 [2024-07-11 02:46:54.608488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.441 [2024-07-11 02:46:54.608528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.441 qpair failed and we were unable to recover it. 00:41:04.441 [2024-07-11 02:46:54.608624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.441 [2024-07-11 02:46:54.608651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.441 qpair failed and we were unable to recover it. 00:41:04.441 [2024-07-11 02:46:54.608736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.441 [2024-07-11 02:46:54.608765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.441 qpair failed and we were unable to recover it. 00:41:04.441 [2024-07-11 02:46:54.608850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.441 [2024-07-11 02:46:54.608877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.608968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.608995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 
00:41:04.442 [2024-07-11 02:46:54.609086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.609112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.609199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.609227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.609328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.609356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.609453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.609480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.609578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.609606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 
00:41:04.442 [2024-07-11 02:46:54.609703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.609729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.609820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.609848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.609935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.609962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.610046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.610075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.610166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.610193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 
00:41:04.442 [2024-07-11 02:46:54.610295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.610324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.610410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.610438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.610533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.610562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.610644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.610671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.610759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.610785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 
00:41:04.442 [2024-07-11 02:46:54.610877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.610903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.610997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.611025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.611118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.611146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.611241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.611269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.611381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.611411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 
00:41:04.442 [2024-07-11 02:46:54.611524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.611552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.611658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.611687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.611796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.611823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.611911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.611938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.612035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.612063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 
00:41:04.442 [2024-07-11 02:46:54.612173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.612200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.612315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.612344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.612452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.612479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.612595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.612622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.612710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.612737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 
00:41:04.442 [2024-07-11 02:46:54.612845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.612889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.612979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.613006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.613099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.613127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.613217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.613244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.613326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.613353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 
00:41:04.442 [2024-07-11 02:46:54.613437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.613466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.613563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.613595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.613678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.613705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.613797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.442 [2024-07-11 02:46:54.613826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.442 qpair failed and we were unable to recover it. 00:41:04.442 [2024-07-11 02:46:54.613924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.613951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 
00:41:04.443 [2024-07-11 02:46:54.614078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.614119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.614217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.614245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.614358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.614387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.614484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.614517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.614608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.614633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 
00:41:04.443 [2024-07-11 02:46:54.614728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.614755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.614860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.614887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.614990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.615016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.615103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.615129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.615218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.615244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 
00:41:04.443 [2024-07-11 02:46:54.615331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.615358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.615458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.615485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.615585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.615614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.615704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.615730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.615814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.615842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 
00:41:04.443 [2024-07-11 02:46:54.615929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.615957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.616044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.616070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.616157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.616184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.616273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.616300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.616389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.616418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 
00:41:04.443 [2024-07-11 02:46:54.616519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.616547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.616638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.616666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.616753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.616780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.616883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.616913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.617017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.617044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 
00:41:04.443 [2024-07-11 02:46:54.617139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.617166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.617278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.617306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.617406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.617432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.617530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.617557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.617671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.617712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 
00:41:04.443 [2024-07-11 02:46:54.617805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.617832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.617924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.617954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.618041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.618067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.618155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.618181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.443 qpair failed and we were unable to recover it. 00:41:04.443 [2024-07-11 02:46:54.618278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.443 [2024-07-11 02:46:54.618305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 
00:41:04.444 [2024-07-11 02:46:54.618412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.618441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.618536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.618567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.618656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.618683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.618767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.618793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.618881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.618907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 
00:41:04.444 [2024-07-11 02:46:54.618989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.619015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.619105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.619131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.619219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.619245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.619334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.619360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.619447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.619476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 
00:41:04.444 [2024-07-11 02:46:54.619579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.619607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.619694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.619720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.619815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.619842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.619933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.619960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.620089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.620129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 
00:41:04.444 [2024-07-11 02:46:54.620220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.620248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.620342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.620370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.620480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.620507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.620654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.620683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.620810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.620851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 
00:41:04.444 [2024-07-11 02:46:54.620946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.620976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.621104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.621167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.621271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.621311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.621406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.621434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.621521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.621548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 
00:41:04.444 [2024-07-11 02:46:54.621656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.621684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.621781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.621808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.621909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.621938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.622034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.622066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.622161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.622188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 
00:41:04.444 [2024-07-11 02:46:54.622282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.622310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.622404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.622431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.622524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.622552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.622634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.622661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.622746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.622772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 
00:41:04.444 [2024-07-11 02:46:54.622859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.622885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.622976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.623005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.623092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.623120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.623215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.623241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 00:41:04.444 [2024-07-11 02:46:54.623335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.623362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.444 qpair failed and we were unable to recover it. 
00:41:04.444 [2024-07-11 02:46:54.623459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.444 [2024-07-11 02:46:54.623486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.623588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.623618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.623711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.623738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.623825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.623851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.623950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.623979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 
00:41:04.445 [2024-07-11 02:46:54.624103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.624143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.624237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.624263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.624360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.624387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.624503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.624537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.624634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.624660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 
00:41:04.445 [2024-07-11 02:46:54.624770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.624798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.624901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.624930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.625027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.625053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.625137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.625164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.625260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.625288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 
00:41:04.445 [2024-07-11 02:46:54.625385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.625416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.625538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.625566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.625679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.625706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.625802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.625828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.625919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.625945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 
00:41:04.445 [2024-07-11 02:46:54.626045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.626072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.626197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.626224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.626315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.626341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.626420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.626447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.626543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.626570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 
00:41:04.445 [2024-07-11 02:46:54.626658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.626685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.626767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.626793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.626878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.626903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.626982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.627009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.627100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.627127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 
00:41:04.445 [2024-07-11 02:46:54.627243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.627270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.627387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.627415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.627518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.627548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.627651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.627679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.627861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.627927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 
00:41:04.445 [2024-07-11 02:46:54.628079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.628140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.628226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.628252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.628353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.628381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.628485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.628516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.628618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.628646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 
00:41:04.445 [2024-07-11 02:46:54.628764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.445 [2024-07-11 02:46:54.628792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.445 qpair failed and we were unable to recover it. 00:41:04.445 [2024-07-11 02:46:54.628942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.446 [2024-07-11 02:46:54.628982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.446 qpair failed and we were unable to recover it. 00:41:04.446 [2024-07-11 02:46:54.629079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.446 [2024-07-11 02:46:54.629110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.446 qpair failed and we were unable to recover it. 00:41:04.446 [2024-07-11 02:46:54.629213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.446 [2024-07-11 02:46:54.629243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.446 qpair failed and we were unable to recover it. 00:41:04.446 [2024-07-11 02:46:54.629335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.446 [2024-07-11 02:46:54.629361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.446 qpair failed and we were unable to recover it. 
00:41:04.446 [2024-07-11 02:46:54.629455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.629482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.629627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.629669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.629758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.629785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.629878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.629906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.629990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.630018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.630110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.630139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.630256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.630283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.630400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.630427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.630508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.630542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.630683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.630742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.630836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.630862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.630961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.630987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.631074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.631101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.631198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.631225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.631323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.631349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.631441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.631467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.631562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.631592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.631679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.631707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.631796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.631825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.631910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.631937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.632035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.632065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.632172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.632200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.632317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.632345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.632453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.632494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.632608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.632634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.632720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.632746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.632850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.632879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.632993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.633020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.633124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.633153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.633249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.633275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.633362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.633389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.633480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.633514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.633627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.633666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.633757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.633784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.633884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.633910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.633999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.634026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.446 [2024-07-11 02:46:54.634107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.446 [2024-07-11 02:46:54.634134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.446 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.634243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.634269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.634375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.634401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.634488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.634523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.634612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.634641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.634728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.634755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.634839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.634865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.634948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.634975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.635061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.635088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.635175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.635201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.635292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.635318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.635408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.635434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.635522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.635548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.635640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.635668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.635761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.635787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.635880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.635906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.635994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.636022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.636115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.636145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.636240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.636268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.636354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.636381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.636475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.636503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.636597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.636624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.636718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.636746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.636845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.636873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.636955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.636981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.637071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.637097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.637188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.637213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.637302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.637329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.637416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.637448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.637548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.637575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.637662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.637687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.637772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.637798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.637885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.637912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.637992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.638019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.638107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.638137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.638231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.638260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.447 [2024-07-11 02:46:54.638352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.447 [2024-07-11 02:46:54.638377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.447 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.638467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.638494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.638602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.638629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.638724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.638750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.638841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.638867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.638949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.638975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.639069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.639095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.639182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.639210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.639297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.639323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.639410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.639437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.639536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.639563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.639655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.639681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.639766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.639792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.639886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.639911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.639994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.640020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.640109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.640134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.640227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.640252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.640350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.640379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.640468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.640495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.640596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.640629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.640714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.640741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.640875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.640901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.640988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.641015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.641109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.641138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.641223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.641250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.641339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.641365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.641450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.641476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.641562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.641589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.641676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.641702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.641784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.641809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.641897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.641922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.642008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.642034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.642130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.642157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.642252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.642278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.642366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.642391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.642471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.642497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.642598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.642624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.642735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.642760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.642845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.642870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.642953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.642979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.448 [2024-07-11 02:46:54.643060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.448 [2024-07-11 02:46:54.643085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.448 qpair failed and we were unable to recover it.
00:41:04.449 [2024-07-11 02:46:54.643175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.449 [2024-07-11 02:46:54.643200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.449 qpair failed and we were unable to recover it.
00:41:04.449 [2024-07-11 02:46:54.643287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.449 [2024-07-11 02:46:54.643312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.449 qpair failed and we were unable to recover it.
00:41:04.449 [2024-07-11 02:46:54.643391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.643417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.643517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.643547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.643645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.643673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.643764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.643797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.643883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.643910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 
00:41:04.449 [2024-07-11 02:46:54.643997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.644024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.644125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.644155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.644253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.644281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.644372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.644400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.644481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.644508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 
00:41:04.449 [2024-07-11 02:46:54.644613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.644640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.644727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.644753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.644841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.644868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.644957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.644983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.645065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.645091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 
00:41:04.449 [2024-07-11 02:46:54.645183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.645209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.645308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.645337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.645431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.645458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.645551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.645580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.645671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.645698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 
00:41:04.449 [2024-07-11 02:46:54.645790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.645817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.645915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.645943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.646027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.646054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.646142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.646170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.646267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.646295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 
00:41:04.449 [2024-07-11 02:46:54.646387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.646413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.646506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.646537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.646640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.646666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.646753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.646781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.646878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.646905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 
00:41:04.449 [2024-07-11 02:46:54.646997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.647025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.647120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.647146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.647232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.647259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.647355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.647382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.647469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.647495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 
00:41:04.449 [2024-07-11 02:46:54.647596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.647623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.647713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.647741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.647832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.647858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.449 qpair failed and we were unable to recover it. 00:41:04.449 [2024-07-11 02:46:54.648019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.449 [2024-07-11 02:46:54.648067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.648176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.648219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 
00:41:04.450 [2024-07-11 02:46:54.648322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.648350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.648441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.648467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.648575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.648603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.648689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.648720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.648866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.648928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 
00:41:04.450 [2024-07-11 02:46:54.649019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.649048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.649173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.649232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.649322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.649349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.649442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.649470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.649574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.649602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 
00:41:04.450 [2024-07-11 02:46:54.649703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.649729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.649810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.649836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.649920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.649947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.650034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.650060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.650151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.650177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 
00:41:04.450 [2024-07-11 02:46:54.650261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.650287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.650384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.650411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.650499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.650537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.650625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.650651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.650742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.650769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 
00:41:04.450 [2024-07-11 02:46:54.650858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.650885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.650972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.650998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.651085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.651112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.651197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.651224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.651319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.651346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 
00:41:04.450 [2024-07-11 02:46:54.651438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.651465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.651565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.651594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.651681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.651708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.651808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.651836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.651924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.651950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 
00:41:04.450 [2024-07-11 02:46:54.652055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.652123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.652217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.652244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.652382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.652409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.652492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.652528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.652626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.652654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 
00:41:04.450 [2024-07-11 02:46:54.652764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.652804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.652906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.652933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.653025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.653052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.653140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.450 [2024-07-11 02:46:54.653167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.450 qpair failed and we were unable to recover it. 00:41:04.450 [2024-07-11 02:46:54.653260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.451 [2024-07-11 02:46:54.653287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.451 qpair failed and we were unable to recover it. 
00:41:04.451 [2024-07-11 02:46:54.654212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.451 [2024-07-11 02:46:54.654242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.451 qpair failed and we were unable to recover it.
00:41:04.451 [2024-07-11 02:46:54.654333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.451 [2024-07-11 02:46:54.654361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.451 qpair failed and we were unable to recover it.
00:41:04.454 [2024-07-11 02:46:54.666877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.454 [2024-07-11 02:46:54.666904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.454 qpair failed and we were unable to recover it. 00:41:04.454 [2024-07-11 02:46:54.666998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.454 [2024-07-11 02:46:54.667025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.454 qpair failed and we were unable to recover it. 00:41:04.454 [2024-07-11 02:46:54.667113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.454 [2024-07-11 02:46:54.667141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.454 qpair failed and we were unable to recover it. 00:41:04.454 [2024-07-11 02:46:54.667225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.454 [2024-07-11 02:46:54.667251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.454 qpair failed and we were unable to recover it. 00:41:04.454 [2024-07-11 02:46:54.667334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.454 [2024-07-11 02:46:54.667360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.454 qpair failed and we were unable to recover it. 
00:41:04.454 [2024-07-11 02:46:54.667452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.454 [2024-07-11 02:46:54.667481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.454 qpair failed and we were unable to recover it. 00:41:04.454 [2024-07-11 02:46:54.667591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.454 [2024-07-11 02:46:54.667621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.454 qpair failed and we were unable to recover it. 00:41:04.454 [2024-07-11 02:46:54.667714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.454 [2024-07-11 02:46:54.667743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.454 qpair failed and we were unable to recover it. 00:41:04.454 [2024-07-11 02:46:54.667827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.454 [2024-07-11 02:46:54.667854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.454 qpair failed and we were unable to recover it. 00:41:04.454 [2024-07-11 02:46:54.668053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.454 [2024-07-11 02:46:54.668079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.454 qpair failed and we were unable to recover it. 
00:41:04.454 [2024-07-11 02:46:54.668163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.454 [2024-07-11 02:46:54.668190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.454 qpair failed and we were unable to recover it. 00:41:04.454 [2024-07-11 02:46:54.668287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.454 [2024-07-11 02:46:54.668316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.454 qpair failed and we were unable to recover it. 00:41:04.454 [2024-07-11 02:46:54.668409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.454 [2024-07-11 02:46:54.668436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.454 qpair failed and we were unable to recover it. 00:41:04.454 [2024-07-11 02:46:54.668525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.454 [2024-07-11 02:46:54.668555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.454 qpair failed and we were unable to recover it. 00:41:04.454 [2024-07-11 02:46:54.668646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.454 [2024-07-11 02:46:54.668678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.454 qpair failed and we were unable to recover it. 
00:41:04.454 [2024-07-11 02:46:54.668786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.454 [2024-07-11 02:46:54.668814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.454 qpair failed and we were unable to recover it. 00:41:04.454 [2024-07-11 02:46:54.668909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.454 [2024-07-11 02:46:54.668936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.454 qpair failed and we were unable to recover it. 00:41:04.454 [2024-07-11 02:46:54.669029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.454 [2024-07-11 02:46:54.669058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.454 qpair failed and we were unable to recover it. 00:41:04.454 [2024-07-11 02:46:54.669163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.454 [2024-07-11 02:46:54.669191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.454 qpair failed and we were unable to recover it. 00:41:04.454 [2024-07-11 02:46:54.669280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.454 [2024-07-11 02:46:54.669307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.454 qpair failed and we were unable to recover it. 
00:41:04.454 [2024-07-11 02:46:54.669399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.454 [2024-07-11 02:46:54.669427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.454 qpair failed and we were unable to recover it. 00:41:04.454 [2024-07-11 02:46:54.669519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.454 [2024-07-11 02:46:54.669547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.454 qpair failed and we were unable to recover it. 00:41:04.454 [2024-07-11 02:46:54.669634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.669660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 00:41:04.455 [2024-07-11 02:46:54.669748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.669777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 00:41:04.455 [2024-07-11 02:46:54.669873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.669900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 
00:41:04.455 [2024-07-11 02:46:54.670000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.670027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 00:41:04.455 [2024-07-11 02:46:54.670113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.670141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 00:41:04.455 [2024-07-11 02:46:54.670223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.670250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 00:41:04.455 [2024-07-11 02:46:54.670344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.670371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 00:41:04.455 [2024-07-11 02:46:54.670454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.670481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 
00:41:04.455 [2024-07-11 02:46:54.670576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.670603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 00:41:04.455 [2024-07-11 02:46:54.670689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.670716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 00:41:04.455 [2024-07-11 02:46:54.670823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.670851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 00:41:04.455 [2024-07-11 02:46:54.670946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.670976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 00:41:04.455 [2024-07-11 02:46:54.671062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.671089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 
00:41:04.455 [2024-07-11 02:46:54.671177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.671205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 00:41:04.455 [2024-07-11 02:46:54.671295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.671322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 00:41:04.455 [2024-07-11 02:46:54.671405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.671433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 00:41:04.455 [2024-07-11 02:46:54.671521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.671556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 00:41:04.455 [2024-07-11 02:46:54.671667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.671694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 
00:41:04.455 [2024-07-11 02:46:54.671784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.671811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 00:41:04.455 [2024-07-11 02:46:54.671902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.671931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 00:41:04.455 [2024-07-11 02:46:54.672032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.672059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 00:41:04.455 [2024-07-11 02:46:54.672153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.672180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 00:41:04.455 [2024-07-11 02:46:54.672268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.672296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 
00:41:04.455 [2024-07-11 02:46:54.672386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.672412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 00:41:04.455 [2024-07-11 02:46:54.672492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.672528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 00:41:04.455 [2024-07-11 02:46:54.672622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.672648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 00:41:04.455 [2024-07-11 02:46:54.672733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.672760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 00:41:04.455 [2024-07-11 02:46:54.672844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.672870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 
00:41:04.455 [2024-07-11 02:46:54.672953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.672982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 00:41:04.455 [2024-07-11 02:46:54.673067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.673096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 00:41:04.455 [2024-07-11 02:46:54.673179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.673206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 00:41:04.455 [2024-07-11 02:46:54.673298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.455 [2024-07-11 02:46:54.673325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.455 qpair failed and we were unable to recover it. 00:41:04.455 [2024-07-11 02:46:54.673426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.456 [2024-07-11 02:46:54.673458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.456 qpair failed and we were unable to recover it. 
00:41:04.456 [2024-07-11 02:46:54.673549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.456 [2024-07-11 02:46:54.673577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.456 qpair failed and we were unable to recover it. 00:41:04.456 [2024-07-11 02:46:54.673682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.456 [2024-07-11 02:46:54.673709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.456 qpair failed and we were unable to recover it. 00:41:04.456 [2024-07-11 02:46:54.673793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.456 [2024-07-11 02:46:54.673820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.456 qpair failed and we were unable to recover it. 00:41:04.456 [2024-07-11 02:46:54.673910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.456 [2024-07-11 02:46:54.673938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.456 qpair failed and we were unable to recover it. 00:41:04.456 [2024-07-11 02:46:54.674024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.456 [2024-07-11 02:46:54.674051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.456 qpair failed and we were unable to recover it. 
00:41:04.456 [2024-07-11 02:46:54.674134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.456 [2024-07-11 02:46:54.674161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.456 qpair failed and we were unable to recover it. 00:41:04.456 [2024-07-11 02:46:54.674243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.456 [2024-07-11 02:46:54.674270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.456 qpair failed and we were unable to recover it. 00:41:04.456 [2024-07-11 02:46:54.674375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.456 [2024-07-11 02:46:54.674402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.456 qpair failed and we were unable to recover it. 00:41:04.456 [2024-07-11 02:46:54.674484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.456 [2024-07-11 02:46:54.674515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.456 qpair failed and we were unable to recover it. 00:41:04.456 [2024-07-11 02:46:54.674620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.456 [2024-07-11 02:46:54.674646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.456 qpair failed and we were unable to recover it. 
00:41:04.456 [2024-07-11 02:46:54.674739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.456 [2024-07-11 02:46:54.674765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.456 qpair failed and we were unable to recover it. 00:41:04.456 [2024-07-11 02:46:54.674846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.456 [2024-07-11 02:46:54.674872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.456 qpair failed and we were unable to recover it. 00:41:04.456 [2024-07-11 02:46:54.674956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.456 [2024-07-11 02:46:54.674983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.456 qpair failed and we were unable to recover it. 00:41:04.456 [2024-07-11 02:46:54.675082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.456 [2024-07-11 02:46:54.675108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.456 qpair failed and we were unable to recover it. 00:41:04.456 [2024-07-11 02:46:54.675201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.456 [2024-07-11 02:46:54.675228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.456 qpair failed and we were unable to recover it. 
00:41:04.456 [2024-07-11 02:46:54.675330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.456 [2024-07-11 02:46:54.675357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.456 qpair failed and we were unable to recover it. 00:41:04.456 [2024-07-11 02:46:54.675451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.456 [2024-07-11 02:46:54.675480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.456 qpair failed and we were unable to recover it. 00:41:04.456 [2024-07-11 02:46:54.675587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.456 [2024-07-11 02:46:54.675616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.456 qpair failed and we were unable to recover it. 00:41:04.456 [2024-07-11 02:46:54.675706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.456 [2024-07-11 02:46:54.675733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.456 qpair failed and we were unable to recover it. 00:41:04.456 [2024-07-11 02:46:54.675815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.456 [2024-07-11 02:46:54.675842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.456 qpair failed and we were unable to recover it. 
00:41:04.456 [2024-07-11 02:46:54.675936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.456 [2024-07-11 02:46:54.675965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.456 qpair failed and we were unable to recover it.
00:41:04.457 [2024-07-11 02:46:54.677355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.457 [2024-07-11 02:46:54.677384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.457 qpair failed and we were unable to recover it.
00:41:04.457 [2024-07-11 02:46:54.677607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.457 [2024-07-11 02:46:54.677637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.457 qpair failed and we were unable to recover it.
[... same three-line sequence repeated verbatim for tqpair=0x2266180, 0x7f333c000b90 and 0x7f332c000b90 through 2024-07-11 02:46:54.689579 ...]
00:41:04.460 [2024-07-11 02:46:54.689668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.460 [2024-07-11 02:46:54.689695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.460 qpair failed and we were unable to recover it. 00:41:04.460 [2024-07-11 02:46:54.689789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.460 [2024-07-11 02:46:54.689818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.460 qpair failed and we were unable to recover it. 00:41:04.460 [2024-07-11 02:46:54.689906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.460 [2024-07-11 02:46:54.689933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.460 qpair failed and we were unable to recover it. 00:41:04.460 [2024-07-11 02:46:54.690040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.460 [2024-07-11 02:46:54.690066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.460 qpair failed and we were unable to recover it. 00:41:04.460 [2024-07-11 02:46:54.690159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.460 [2024-07-11 02:46:54.690188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.460 qpair failed and we were unable to recover it. 
00:41:04.460 [2024-07-11 02:46:54.690280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.460 [2024-07-11 02:46:54.690307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.460 qpair failed and we were unable to recover it. 00:41:04.460 [2024-07-11 02:46:54.690393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.460 [2024-07-11 02:46:54.690419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.460 qpair failed and we were unable to recover it. 00:41:04.460 [2024-07-11 02:46:54.690523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.460 [2024-07-11 02:46:54.690550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.460 qpair failed and we were unable to recover it. 00:41:04.460 [2024-07-11 02:46:54.690635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.460 [2024-07-11 02:46:54.690661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.460 qpair failed and we were unable to recover it. 00:41:04.460 [2024-07-11 02:46:54.690747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.460 [2024-07-11 02:46:54.690772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.460 qpair failed and we were unable to recover it. 
00:41:04.460 [2024-07-11 02:46:54.690861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.460 [2024-07-11 02:46:54.690887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.460 qpair failed and we were unable to recover it. 00:41:04.460 [2024-07-11 02:46:54.690979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.460 [2024-07-11 02:46:54.691005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.460 qpair failed and we were unable to recover it. 00:41:04.460 [2024-07-11 02:46:54.691087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.460 [2024-07-11 02:46:54.691117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.460 qpair failed and we were unable to recover it. 00:41:04.460 [2024-07-11 02:46:54.691202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.460 [2024-07-11 02:46:54.691227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.460 qpair failed and we were unable to recover it. 00:41:04.461 [2024-07-11 02:46:54.691320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.691349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 
00:41:04.461 [2024-07-11 02:46:54.691432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.691459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 00:41:04.461 [2024-07-11 02:46:54.691543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.691570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 00:41:04.461 [2024-07-11 02:46:54.691654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.691681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 00:41:04.461 [2024-07-11 02:46:54.691773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.691803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 00:41:04.461 [2024-07-11 02:46:54.691892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.691919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 
00:41:04.461 [2024-07-11 02:46:54.692008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.692035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 00:41:04.461 [2024-07-11 02:46:54.692133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.692160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 00:41:04.461 [2024-07-11 02:46:54.692243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.692270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 00:41:04.461 [2024-07-11 02:46:54.692353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.692379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 00:41:04.461 [2024-07-11 02:46:54.692461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.692486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 
00:41:04.461 [2024-07-11 02:46:54.692602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.692632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 00:41:04.461 [2024-07-11 02:46:54.692746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.692774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 00:41:04.461 [2024-07-11 02:46:54.692867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.692894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 00:41:04.461 [2024-07-11 02:46:54.692983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.693010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 00:41:04.461 [2024-07-11 02:46:54.693095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.693122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 
00:41:04.461 [2024-07-11 02:46:54.693214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.693241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 00:41:04.461 [2024-07-11 02:46:54.693327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.693354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 00:41:04.461 [2024-07-11 02:46:54.693438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.693465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 00:41:04.461 [2024-07-11 02:46:54.693571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.693599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 00:41:04.461 [2024-07-11 02:46:54.693686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.693715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 
00:41:04.461 [2024-07-11 02:46:54.693828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.693857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 00:41:04.461 [2024-07-11 02:46:54.693945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.693971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 00:41:04.461 [2024-07-11 02:46:54.694058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.694087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 00:41:04.461 [2024-07-11 02:46:54.694192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.694219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 00:41:04.461 [2024-07-11 02:46:54.694307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.694337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 
00:41:04.461 [2024-07-11 02:46:54.694423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.694450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 00:41:04.461 [2024-07-11 02:46:54.694546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.694574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 00:41:04.461 [2024-07-11 02:46:54.694667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.694695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 00:41:04.461 [2024-07-11 02:46:54.694785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.694811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 00:41:04.461 [2024-07-11 02:46:54.694894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.694922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 
00:41:04.461 [2024-07-11 02:46:54.695009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.461 [2024-07-11 02:46:54.695035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.461 qpair failed and we were unable to recover it. 00:41:04.461 [2024-07-11 02:46:54.695119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.695145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 00:41:04.462 [2024-07-11 02:46:54.695236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.695264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 00:41:04.462 [2024-07-11 02:46:54.695350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.695377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 00:41:04.462 [2024-07-11 02:46:54.695464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.695490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 
00:41:04.462 [2024-07-11 02:46:54.695587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.695614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 00:41:04.462 [2024-07-11 02:46:54.695696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.695723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 00:41:04.462 [2024-07-11 02:46:54.695830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.695856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 00:41:04.462 [2024-07-11 02:46:54.695949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.695976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 00:41:04.462 [2024-07-11 02:46:54.696068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.696095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 
00:41:04.462 [2024-07-11 02:46:54.696184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.696214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 00:41:04.462 [2024-07-11 02:46:54.696307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.696336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 00:41:04.462 [2024-07-11 02:46:54.696427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.696453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 00:41:04.462 [2024-07-11 02:46:54.696535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.696562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 00:41:04.462 [2024-07-11 02:46:54.696662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.696687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 
00:41:04.462 [2024-07-11 02:46:54.696788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.696816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 00:41:04.462 [2024-07-11 02:46:54.696922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.696950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 00:41:04.462 [2024-07-11 02:46:54.697046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.697075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 00:41:04.462 [2024-07-11 02:46:54.697164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.697193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 00:41:04.462 [2024-07-11 02:46:54.697291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.697319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 
00:41:04.462 [2024-07-11 02:46:54.697412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.697439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 00:41:04.462 [2024-07-11 02:46:54.697531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.697559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 00:41:04.462 [2024-07-11 02:46:54.697653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.697680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 00:41:04.462 [2024-07-11 02:46:54.697790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.697819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 00:41:04.462 [2024-07-11 02:46:54.697903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.697930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 
00:41:04.462 [2024-07-11 02:46:54.698018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.698046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 00:41:04.462 [2024-07-11 02:46:54.698136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.698162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 00:41:04.462 [2024-07-11 02:46:54.698248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.698275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 00:41:04.462 [2024-07-11 02:46:54.698365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.698392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 00:41:04.462 [2024-07-11 02:46:54.698476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.698503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it. 
00:41:04.462 [2024-07-11 02:46:54.698608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.462 [2024-07-11 02:46:54.698634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.462 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequence repeated ~115 times between 02:46:54.698608 and 02:46:54.712646; all attempts targeted addr=10.0.0.2, port=4420; tqpair handles observed: 0x2266180, 0x7f333c000b90, 0x7f3334000b90, 0x7f332c000b90 ...]
00:41:04.466 [2024-07-11 02:46:54.712740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.466 [2024-07-11 02:46:54.712766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.466 qpair failed and we were unable to recover it. 00:41:04.466 [2024-07-11 02:46:54.712855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.466 [2024-07-11 02:46:54.712884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.466 qpair failed and we were unable to recover it. 00:41:04.466 [2024-07-11 02:46:54.712975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.466 [2024-07-11 02:46:54.713002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.466 qpair failed and we were unable to recover it. 00:41:04.466 [2024-07-11 02:46:54.713086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.466 [2024-07-11 02:46:54.713112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.466 qpair failed and we were unable to recover it. 00:41:04.466 [2024-07-11 02:46:54.713203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.466 [2024-07-11 02:46:54.713229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.466 qpair failed and we were unable to recover it. 
00:41:04.466 [2024-07-11 02:46:54.713337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.466 [2024-07-11 02:46:54.713363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.466 qpair failed and we were unable to recover it. 00:41:04.466 [2024-07-11 02:46:54.713452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.466 [2024-07-11 02:46:54.713479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.466 qpair failed and we were unable to recover it. 00:41:04.466 [2024-07-11 02:46:54.713583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.466 [2024-07-11 02:46:54.713612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.466 qpair failed and we were unable to recover it. 00:41:04.466 [2024-07-11 02:46:54.713703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.466 [2024-07-11 02:46:54.713729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.466 qpair failed and we were unable to recover it. 00:41:04.466 [2024-07-11 02:46:54.713817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.466 [2024-07-11 02:46:54.713847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.466 qpair failed and we were unable to recover it. 
00:41:04.466 [2024-07-11 02:46:54.713935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.466 [2024-07-11 02:46:54.713960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.466 qpair failed and we were unable to recover it. 00:41:04.466 [2024-07-11 02:46:54.714055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.466 [2024-07-11 02:46:54.714082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.466 qpair failed and we were unable to recover it. 00:41:04.466 [2024-07-11 02:46:54.714176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.466 [2024-07-11 02:46:54.714202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.466 qpair failed and we were unable to recover it. 00:41:04.466 [2024-07-11 02:46:54.714295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.466 [2024-07-11 02:46:54.714323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.466 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.714420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.714447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 
00:41:04.467 [2024-07-11 02:46:54.714546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.714573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.714675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.714702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.714800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.714828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.714919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.714945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.715038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.715065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 
00:41:04.467 [2024-07-11 02:46:54.715175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.715203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.715292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.715319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.715404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.715432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.715529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.715557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.715644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.715674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 
00:41:04.467 [2024-07-11 02:46:54.715766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.715792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.715910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.715937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.716047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.716081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.716186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.716214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.716307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.716333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 
00:41:04.467 [2024-07-11 02:46:54.716424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.716452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.716545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.716571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.716663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.716690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.716772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.716797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.716906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.716932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 
00:41:04.467 [2024-07-11 02:46:54.717033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.717058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.717144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.717171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.717268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.717296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.717389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.717416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.717519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.717547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 
00:41:04.467 [2024-07-11 02:46:54.717640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.717667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.717765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.717794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.717887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.717915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.718005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.718031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.718124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.718151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 
00:41:04.467 [2024-07-11 02:46:54.718233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.718260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.718346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.718373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.467 [2024-07-11 02:46:54.718468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.467 [2024-07-11 02:46:54.718495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.467 qpair failed and we were unable to recover it. 00:41:04.468 [2024-07-11 02:46:54.718605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.468 [2024-07-11 02:46:54.718645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.468 qpair failed and we were unable to recover it. 00:41:04.468 [2024-07-11 02:46:54.718745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.468 [2024-07-11 02:46:54.718773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.468 qpair failed and we were unable to recover it. 
00:41:04.468 [2024-07-11 02:46:54.718864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.468 [2024-07-11 02:46:54.718890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.468 qpair failed and we were unable to recover it. 00:41:04.468 [2024-07-11 02:46:54.718988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.468 [2024-07-11 02:46:54.719017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.468 qpair failed and we were unable to recover it. 00:41:04.468 [2024-07-11 02:46:54.719117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.468 [2024-07-11 02:46:54.719150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.468 qpair failed and we were unable to recover it. 00:41:04.468 [2024-07-11 02:46:54.719256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.468 [2024-07-11 02:46:54.719284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.468 qpair failed and we were unable to recover it. 00:41:04.468 [2024-07-11 02:46:54.719376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.468 [2024-07-11 02:46:54.719403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.468 qpair failed and we were unable to recover it. 
00:41:04.468 [2024-07-11 02:46:54.719530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.468 [2024-07-11 02:46:54.719577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.468 qpair failed and we were unable to recover it. 00:41:04.468 [2024-07-11 02:46:54.719679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.468 [2024-07-11 02:46:54.719707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.468 qpair failed and we were unable to recover it. 00:41:04.468 [2024-07-11 02:46:54.719795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.468 [2024-07-11 02:46:54.719822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.468 qpair failed and we were unable to recover it. 00:41:04.468 [2024-07-11 02:46:54.719906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.468 [2024-07-11 02:46:54.719933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.468 qpair failed and we were unable to recover it. 00:41:04.468 [2024-07-11 02:46:54.720043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.468 [2024-07-11 02:46:54.720070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.468 qpair failed and we were unable to recover it. 
00:41:04.468 [2024-07-11 02:46:54.720163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.468 [2024-07-11 02:46:54.720190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.468 qpair failed and we were unable to recover it. 00:41:04.468 [2024-07-11 02:46:54.720274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.468 [2024-07-11 02:46:54.720300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.468 qpair failed and we were unable to recover it. 00:41:04.468 [2024-07-11 02:46:54.720386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.468 [2024-07-11 02:46:54.720434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.468 qpair failed and we were unable to recover it. 00:41:04.468 [2024-07-11 02:46:54.720539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.468 [2024-07-11 02:46:54.720568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.468 qpair failed and we were unable to recover it. 00:41:04.468 [2024-07-11 02:46:54.720670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.468 [2024-07-11 02:46:54.720710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.468 qpair failed and we were unable to recover it. 
00:41:04.468 [2024-07-11 02:46:54.720798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.468 [2024-07-11 02:46:54.720825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.468 qpair failed and we were unable to recover it. 00:41:04.468 [2024-07-11 02:46:54.720923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.468 [2024-07-11 02:46:54.720952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.468 qpair failed and we were unable to recover it. 00:41:04.468 [2024-07-11 02:46:54.721038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.468 [2024-07-11 02:46:54.721063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.468 qpair failed and we were unable to recover it. 00:41:04.468 [2024-07-11 02:46:54.721164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.468 [2024-07-11 02:46:54.721191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.468 qpair failed and we were unable to recover it. 00:41:04.468 [2024-07-11 02:46:54.721277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.468 [2024-07-11 02:46:54.721303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.468 qpair failed and we were unable to recover it. 
00:41:04.468 [2024-07-11 02:46:54.721389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.468 [2024-07-11 02:46:54.721416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.468 qpair failed and we were unable to recover it. 00:41:04.468 [2024-07-11 02:46:54.721501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.468 [2024-07-11 02:46:54.721533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.468 qpair failed and we were unable to recover it. 00:41:04.468 [2024-07-11 02:46:54.721632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.468 [2024-07-11 02:46:54.721659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.469 qpair failed and we were unable to recover it. 00:41:04.469 [2024-07-11 02:46:54.721762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.469 [2024-07-11 02:46:54.721794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.469 qpair failed and we were unable to recover it. 00:41:04.469 [2024-07-11 02:46:54.721893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.469 [2024-07-11 02:46:54.721918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.469 qpair failed and we were unable to recover it. 
00:41:04.469 [2024-07-11 02:46:54.722004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.469 [2024-07-11 02:46:54.722030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.469 qpair failed and we were unable to recover it.
00:41:04.469 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it.") repeats continuously from 02:46:54.722 through 02:46:54.736 for tqpair=0x2266180, tqpair=0x7f3334000b90, and tqpair=0x7f333c000b90, all targeting addr=10.0.0.2, port=4420; repeats omitted ...]
00:41:04.472 [2024-07-11 02:46:54.736205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.472 [2024-07-11 02:46:54.736233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.472 qpair failed and we were unable to recover it. 00:41:04.472 [2024-07-11 02:46:54.736325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.472 [2024-07-11 02:46:54.736351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.472 qpair failed and we were unable to recover it. 00:41:04.472 [2024-07-11 02:46:54.736438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.472 [2024-07-11 02:46:54.736465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.472 qpair failed and we were unable to recover it. 00:41:04.472 [2024-07-11 02:46:54.736571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.472 [2024-07-11 02:46:54.736597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.472 qpair failed and we were unable to recover it. 00:41:04.472 [2024-07-11 02:46:54.736686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.472 [2024-07-11 02:46:54.736714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.472 qpair failed and we were unable to recover it. 
00:41:04.472 [2024-07-11 02:46:54.736815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.472 [2024-07-11 02:46:54.736847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.472 qpair failed and we were unable to recover it. 00:41:04.472 [2024-07-11 02:46:54.736967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.472 [2024-07-11 02:46:54.736999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.472 qpair failed and we were unable to recover it. 00:41:04.472 [2024-07-11 02:46:54.737104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.472 [2024-07-11 02:46:54.737131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.472 qpair failed and we were unable to recover it. 00:41:04.472 [2024-07-11 02:46:54.737224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.472 [2024-07-11 02:46:54.737252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.472 qpair failed and we were unable to recover it. 00:41:04.472 [2024-07-11 02:46:54.737345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.472 [2024-07-11 02:46:54.737371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.472 qpair failed and we were unable to recover it. 
00:41:04.472 [2024-07-11 02:46:54.737455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.472 [2024-07-11 02:46:54.737482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.472 qpair failed and we were unable to recover it. 00:41:04.472 [2024-07-11 02:46:54.737591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.472 [2024-07-11 02:46:54.737617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.472 qpair failed and we were unable to recover it. 00:41:04.472 [2024-07-11 02:46:54.737712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.472 [2024-07-11 02:46:54.737739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.472 qpair failed and we were unable to recover it. 00:41:04.472 [2024-07-11 02:46:54.737835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.472 [2024-07-11 02:46:54.737862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.472 qpair failed and we were unable to recover it. 00:41:04.472 [2024-07-11 02:46:54.737955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.472 [2024-07-11 02:46:54.737983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.472 qpair failed and we were unable to recover it. 
00:41:04.472 [2024-07-11 02:46:54.738076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.738106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 00:41:04.473 [2024-07-11 02:46:54.738194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.738220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 00:41:04.473 [2024-07-11 02:46:54.738316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.738343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 00:41:04.473 [2024-07-11 02:46:54.738429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.738456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 00:41:04.473 [2024-07-11 02:46:54.738677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.738704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 
00:41:04.473 [2024-07-11 02:46:54.738786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.738812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 00:41:04.473 [2024-07-11 02:46:54.738897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.738924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 00:41:04.473 [2024-07-11 02:46:54.739045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.739073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 00:41:04.473 [2024-07-11 02:46:54.739167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.739195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 00:41:04.473 [2024-07-11 02:46:54.739290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.739319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 
00:41:04.473 [2024-07-11 02:46:54.739410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.739436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 00:41:04.473 [2024-07-11 02:46:54.739521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.739548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 00:41:04.473 [2024-07-11 02:46:54.739681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.739739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 00:41:04.473 [2024-07-11 02:46:54.739832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.739858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 00:41:04.473 [2024-07-11 02:46:54.739951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.739976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 
00:41:04.473 [2024-07-11 02:46:54.740069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.740098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 00:41:04.473 [2024-07-11 02:46:54.740205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.740234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 00:41:04.473 [2024-07-11 02:46:54.740343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.740372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 00:41:04.473 [2024-07-11 02:46:54.740465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.740492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 00:41:04.473 [2024-07-11 02:46:54.740590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.740618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 
00:41:04.473 [2024-07-11 02:46:54.740705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.740732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 00:41:04.473 [2024-07-11 02:46:54.740827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.740854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 00:41:04.473 [2024-07-11 02:46:54.740944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.740970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 00:41:04.473 [2024-07-11 02:46:54.741053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.741080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 00:41:04.473 [2024-07-11 02:46:54.741159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.741184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 
00:41:04.473 [2024-07-11 02:46:54.741273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.741300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 00:41:04.473 [2024-07-11 02:46:54.741392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.741417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 00:41:04.473 [2024-07-11 02:46:54.741504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.741537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 00:41:04.473 [2024-07-11 02:46:54.741633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.741659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 00:41:04.473 [2024-07-11 02:46:54.741766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.741793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 
00:41:04.473 [2024-07-11 02:46:54.741885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.473 [2024-07-11 02:46:54.741911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.473 qpair failed and we were unable to recover it. 00:41:04.474 [2024-07-11 02:46:54.741999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.742025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 00:41:04.474 [2024-07-11 02:46:54.742110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.742135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 00:41:04.474 [2024-07-11 02:46:54.742242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.742269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 00:41:04.474 [2024-07-11 02:46:54.742377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.742403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 
00:41:04.474 [2024-07-11 02:46:54.742492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.742526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 00:41:04.474 [2024-07-11 02:46:54.742618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.742644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 00:41:04.474 [2024-07-11 02:46:54.742734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.742761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 00:41:04.474 [2024-07-11 02:46:54.742849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.742874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 00:41:04.474 [2024-07-11 02:46:54.742959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.742986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 
00:41:04.474 [2024-07-11 02:46:54.743078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.743105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 00:41:04.474 [2024-07-11 02:46:54.743194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.743224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 00:41:04.474 [2024-07-11 02:46:54.743314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.743342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 00:41:04.474 [2024-07-11 02:46:54.743438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.743465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 00:41:04.474 [2024-07-11 02:46:54.743566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.743593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 
00:41:04.474 [2024-07-11 02:46:54.743683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.743709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 00:41:04.474 [2024-07-11 02:46:54.743800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.743827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 00:41:04.474 [2024-07-11 02:46:54.743913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.743941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 00:41:04.474 [2024-07-11 02:46:54.744034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.744060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 00:41:04.474 [2024-07-11 02:46:54.744153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.744181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 
00:41:04.474 [2024-07-11 02:46:54.744265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.744294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 00:41:04.474 [2024-07-11 02:46:54.744376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.744401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 00:41:04.474 [2024-07-11 02:46:54.744499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.744531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 00:41:04.474 [2024-07-11 02:46:54.744617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.744643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 00:41:04.474 [2024-07-11 02:46:54.744724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.744749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 
00:41:04.474 [2024-07-11 02:46:54.744849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.744875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 00:41:04.474 [2024-07-11 02:46:54.744969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.744994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 00:41:04.474 [2024-07-11 02:46:54.745074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.474 [2024-07-11 02:46:54.745099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.474 qpair failed and we were unable to recover it. 00:41:04.474 [2024-07-11 02:46:54.745189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.475 [2024-07-11 02:46:54.745216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.475 qpair failed and we were unable to recover it. 00:41:04.475 [2024-07-11 02:46:54.745316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.475 [2024-07-11 02:46:54.745345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.475 qpair failed and we were unable to recover it. 
00:41:04.475 [2024-07-11 02:46:54.745443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.475 [2024-07-11 02:46:54.745475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.475 qpair failed and we were unable to recover it. 00:41:04.475 [2024-07-11 02:46:54.745649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.475 [2024-07-11 02:46:54.745676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.475 qpair failed and we were unable to recover it. 00:41:04.475 [2024-07-11 02:46:54.745773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.475 [2024-07-11 02:46:54.745800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.475 qpair failed and we were unable to recover it. 00:41:04.475 [2024-07-11 02:46:54.745885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.475 [2024-07-11 02:46:54.745911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.475 qpair failed and we were unable to recover it. 00:41:04.475 [2024-07-11 02:46:54.746000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.475 [2024-07-11 02:46:54.746026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.475 qpair failed and we were unable to recover it. 
00:41:04.478 [2024-07-11 02:46:54.759661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.478 [2024-07-11 02:46:54.759687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.478 qpair failed and we were unable to recover it. 00:41:04.478 [2024-07-11 02:46:54.759782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.478 [2024-07-11 02:46:54.759808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.478 qpair failed and we were unable to recover it. 00:41:04.478 [2024-07-11 02:46:54.759895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.478 [2024-07-11 02:46:54.759921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.478 qpair failed and we were unable to recover it. 00:41:04.478 [2024-07-11 02:46:54.760009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.478 [2024-07-11 02:46:54.760035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.478 qpair failed and we were unable to recover it. 00:41:04.478 [2024-07-11 02:46:54.760121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.478 [2024-07-11 02:46:54.760148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.478 qpair failed and we were unable to recover it. 
00:41:04.478 [2024-07-11 02:46:54.760239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.478 [2024-07-11 02:46:54.760264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.478 qpair failed and we were unable to recover it. 00:41:04.478 [2024-07-11 02:46:54.760360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.478 [2024-07-11 02:46:54.760386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.478 qpair failed and we were unable to recover it. 00:41:04.478 [2024-07-11 02:46:54.760475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.478 [2024-07-11 02:46:54.760503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.478 qpair failed and we were unable to recover it. 00:41:04.478 [2024-07-11 02:46:54.760601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.478 [2024-07-11 02:46:54.760628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.478 qpair failed and we were unable to recover it. 00:41:04.478 [2024-07-11 02:46:54.760714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.478 [2024-07-11 02:46:54.760741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.478 qpair failed and we were unable to recover it. 
00:41:04.478 [2024-07-11 02:46:54.760826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.478 [2024-07-11 02:46:54.760852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.478 qpair failed and we were unable to recover it. 00:41:04.478 [2024-07-11 02:46:54.760933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.478 [2024-07-11 02:46:54.760960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.478 qpair failed and we were unable to recover it. 00:41:04.479 [2024-07-11 02:46:54.761051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.479 [2024-07-11 02:46:54.761078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.479 qpair failed and we were unable to recover it. 00:41:04.479 [2024-07-11 02:46:54.761168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.479 [2024-07-11 02:46:54.761196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.479 qpair failed and we were unable to recover it. 00:41:04.479 [2024-07-11 02:46:54.761282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.479 [2024-07-11 02:46:54.761309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.479 qpair failed and we were unable to recover it. 
00:41:04.479 [2024-07-11 02:46:54.761396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.479 [2024-07-11 02:46:54.761422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.479 qpair failed and we were unable to recover it. 00:41:04.479 [2024-07-11 02:46:54.761520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.479 [2024-07-11 02:46:54.761547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.479 qpair failed and we were unable to recover it. 00:41:04.479 [2024-07-11 02:46:54.761640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.479 [2024-07-11 02:46:54.761666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.479 qpair failed and we were unable to recover it. 00:41:04.479 [2024-07-11 02:46:54.761788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.479 [2024-07-11 02:46:54.761815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.479 qpair failed and we were unable to recover it. 00:41:04.479 [2024-07-11 02:46:54.761895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.479 [2024-07-11 02:46:54.761921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.479 qpair failed and we were unable to recover it. 
00:41:04.479 [2024-07-11 02:46:54.762006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.479 [2024-07-11 02:46:54.762032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.479 qpair failed and we were unable to recover it. 00:41:04.479 [2024-07-11 02:46:54.762128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.479 [2024-07-11 02:46:54.762155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.479 qpair failed and we were unable to recover it. 00:41:04.479 [2024-07-11 02:46:54.762247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.479 [2024-07-11 02:46:54.762273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.479 qpair failed and we were unable to recover it. 00:41:04.479 [2024-07-11 02:46:54.762359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.479 [2024-07-11 02:46:54.762385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.479 qpair failed and we were unable to recover it. 00:41:04.479 [2024-07-11 02:46:54.762466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.479 [2024-07-11 02:46:54.762492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.479 qpair failed and we were unable to recover it. 
00:41:04.479 [2024-07-11 02:46:54.762586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.479 [2024-07-11 02:46:54.762612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.479 qpair failed and we were unable to recover it. 00:41:04.479 [2024-07-11 02:46:54.762706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.479 [2024-07-11 02:46:54.762733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.479 qpair failed and we were unable to recover it. 00:41:04.479 [2024-07-11 02:46:54.762830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.479 [2024-07-11 02:46:54.762857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.479 qpair failed and we were unable to recover it. 00:41:04.479 [2024-07-11 02:46:54.762953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.479 [2024-07-11 02:46:54.762982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.479 qpair failed and we were unable to recover it. 00:41:04.479 [2024-07-11 02:46:54.763079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.479 [2024-07-11 02:46:54.763106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.479 qpair failed and we were unable to recover it. 
00:41:04.479 [2024-07-11 02:46:54.763192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.479 [2024-07-11 02:46:54.763219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.479 qpair failed and we were unable to recover it. 00:41:04.479 [2024-07-11 02:46:54.763307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.479 [2024-07-11 02:46:54.763334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.479 qpair failed and we were unable to recover it. 00:41:04.479 [2024-07-11 02:46:54.763424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.479 [2024-07-11 02:46:54.763451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.479 qpair failed and we were unable to recover it. 00:41:04.479 [2024-07-11 02:46:54.763552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.479 [2024-07-11 02:46:54.763584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.479 qpair failed and we were unable to recover it. 00:41:04.479 [2024-07-11 02:46:54.763686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.479 [2024-07-11 02:46:54.763714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.479 qpair failed and we were unable to recover it. 
00:41:04.479 [2024-07-11 02:46:54.763807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.479 [2024-07-11 02:46:54.763837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.479 qpair failed and we were unable to recover it. 00:41:04.480 [2024-07-11 02:46:54.763951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.763980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 00:41:04.480 [2024-07-11 02:46:54.764067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.764096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 00:41:04.480 [2024-07-11 02:46:54.764188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.764214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 00:41:04.480 [2024-07-11 02:46:54.764302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.764329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 
00:41:04.480 [2024-07-11 02:46:54.764430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.764456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 00:41:04.480 [2024-07-11 02:46:54.764548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.764575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 00:41:04.480 [2024-07-11 02:46:54.764666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.764693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 00:41:04.480 [2024-07-11 02:46:54.764782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.764807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 00:41:04.480 [2024-07-11 02:46:54.764899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.764925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 
00:41:04.480 [2024-07-11 02:46:54.765009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.765035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 00:41:04.480 [2024-07-11 02:46:54.765127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.765152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 00:41:04.480 [2024-07-11 02:46:54.765248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.765277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 00:41:04.480 [2024-07-11 02:46:54.765381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.765408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 00:41:04.480 [2024-07-11 02:46:54.765491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.765530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 
00:41:04.480 [2024-07-11 02:46:54.765627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.765654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 00:41:04.480 [2024-07-11 02:46:54.765749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.765777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 00:41:04.480 [2024-07-11 02:46:54.765858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.765885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 00:41:04.480 [2024-07-11 02:46:54.765977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.766005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 00:41:04.480 [2024-07-11 02:46:54.766090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.766116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 
00:41:04.480 [2024-07-11 02:46:54.766262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.766291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 00:41:04.480 [2024-07-11 02:46:54.766379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.766408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 00:41:04.480 [2024-07-11 02:46:54.766506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.766541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 00:41:04.480 [2024-07-11 02:46:54.766636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.766663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 00:41:04.480 [2024-07-11 02:46:54.766750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.766778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 
00:41:04.480 [2024-07-11 02:46:54.766870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.766897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 00:41:04.480 [2024-07-11 02:46:54.767000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.767028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 00:41:04.480 [2024-07-11 02:46:54.767116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.767145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 00:41:04.480 [2024-07-11 02:46:54.767237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.767264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 00:41:04.480 [2024-07-11 02:46:54.767352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.767379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 
00:41:04.480 [2024-07-11 02:46:54.767461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.767488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 00:41:04.480 [2024-07-11 02:46:54.767596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.767623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.480 qpair failed and we were unable to recover it. 00:41:04.480 [2024-07-11 02:46:54.767715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.480 [2024-07-11 02:46:54.767742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.481 qpair failed and we were unable to recover it. 00:41:04.481 [2024-07-11 02:46:54.767832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.481 [2024-07-11 02:46:54.767858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.481 qpair failed and we were unable to recover it. 00:41:04.481 [2024-07-11 02:46:54.767960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.481 [2024-07-11 02:46:54.767987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.481 qpair failed and we were unable to recover it. 
00:41:04.481 [2024-07-11 02:46:54.768079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.481 [2024-07-11 02:46:54.768106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.481 qpair failed and we were unable to recover it. 00:41:04.481 [2024-07-11 02:46:54.768193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.481 [2024-07-11 02:46:54.768220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.481 qpair failed and we were unable to recover it. 00:41:04.481 [2024-07-11 02:46:54.768314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.481 [2024-07-11 02:46:54.768342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.481 qpair failed and we were unable to recover it. 00:41:04.481 [2024-07-11 02:46:54.768430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.481 [2024-07-11 02:46:54.768460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.481 qpair failed and we were unable to recover it. 00:41:04.481 [2024-07-11 02:46:54.768563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.481 [2024-07-11 02:46:54.768596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.481 qpair failed and we were unable to recover it. 
00:41:04.481 [2024-07-11 02:46:54.768687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.481 [2024-07-11 02:46:54.768713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.481 qpair failed and we were unable to recover it.
00:41:04.481 [2024-07-11 02:46:54.768800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.481 [2024-07-11 02:46:54.768828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.481 qpair failed and we were unable to recover it.
00:41:04.481 [2024-07-11 02:46:54.768914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.481 [2024-07-11 02:46:54.768939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.481 qpair failed and we were unable to recover it.
00:41:04.481 [2024-07-11 02:46:54.769034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.481 [2024-07-11 02:46:54.769062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.481 qpair failed and we were unable to recover it.
00:41:04.481 [2024-07-11 02:46:54.769150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.481 [2024-07-11 02:46:54.769175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.481 qpair failed and we were unable to recover it.
00:41:04.481 [2024-07-11 02:46:54.769261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.481 [2024-07-11 02:46:54.769287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.481 qpair failed and we were unable to recover it.
00:41:04.481 [2024-07-11 02:46:54.769374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.481 [2024-07-11 02:46:54.769400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.481 qpair failed and we were unable to recover it.
00:41:04.481 [2024-07-11 02:46:54.769484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.481 [2024-07-11 02:46:54.769517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.481 qpair failed and we were unable to recover it.
00:41:04.481 [2024-07-11 02:46:54.769602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.481 [2024-07-11 02:46:54.769628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.481 qpair failed and we were unable to recover it.
00:41:04.481 [2024-07-11 02:46:54.769717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.481 [2024-07-11 02:46:54.769744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.481 qpair failed and we were unable to recover it.
00:41:04.481 [2024-07-11 02:46:54.769850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.481 [2024-07-11 02:46:54.769879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.481 qpair failed and we were unable to recover it.
00:41:04.481 [2024-07-11 02:46:54.769966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.481 [2024-07-11 02:46:54.769993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.481 qpair failed and we were unable to recover it.
00:41:04.481 [2024-07-11 02:46:54.770083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.481 [2024-07-11 02:46:54.770111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.481 qpair failed and we were unable to recover it.
00:41:04.481 [2024-07-11 02:46:54.770207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.481 [2024-07-11 02:46:54.770234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.481 qpair failed and we were unable to recover it.
00:41:04.481 [2024-07-11 02:46:54.770326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.481 [2024-07-11 02:46:54.770354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.481 qpair failed and we were unable to recover it.
00:41:04.481 [2024-07-11 02:46:54.770442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.481 [2024-07-11 02:46:54.770469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.481 qpair failed and we were unable to recover it.
00:41:04.481 [2024-07-11 02:46:54.770577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.481 [2024-07-11 02:46:54.770604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.481 qpair failed and we were unable to recover it.
00:41:04.481 [2024-07-11 02:46:54.770724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.481 [2024-07-11 02:46:54.770751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.481 qpair failed and we were unable to recover it.
00:41:04.481 [2024-07-11 02:46:54.770843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.481 [2024-07-11 02:46:54.770870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.481 qpair failed and we were unable to recover it.
00:41:04.481 [2024-07-11 02:46:54.770957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.481 [2024-07-11 02:46:54.770984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.481 qpair failed and we were unable to recover it.
00:41:04.481 [2024-07-11 02:46:54.771075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.481 [2024-07-11 02:46:54.771103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.481 qpair failed and we were unable to recover it.
00:41:04.481 [2024-07-11 02:46:54.771196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.481 [2024-07-11 02:46:54.771223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.481 qpair failed and we were unable to recover it.
00:41:04.481 [2024-07-11 02:46:54.771316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.481 [2024-07-11 02:46:54.771345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.481 qpair failed and we were unable to recover it.
00:41:04.481 [2024-07-11 02:46:54.771434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.481 [2024-07-11 02:46:54.771461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.481 qpair failed and we were unable to recover it.
00:41:04.481 [2024-07-11 02:46:54.771618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.481 [2024-07-11 02:46:54.771671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.771755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.771781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.771873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.771904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.772003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.772031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.772121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.772148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.772239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.772265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.772362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.772388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.772477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.772505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.772603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.772630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.772729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.772756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.772846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.772872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.772956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.772983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.773072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.773105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.773190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.773216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.773304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.773330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.773419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.773444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.773540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.773567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.773649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.773674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.773765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.773792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.773880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.773906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.773996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.774023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.774120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.774147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.774231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.774257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.774340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.774365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.774455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.774480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.774588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.774617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.774710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.774737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.774829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.774856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.774939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.774966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.775057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.775088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.775178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.775206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.775311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.775337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.775424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.775451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.775547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.775575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.775674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.775702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.482 [2024-07-11 02:46:54.775791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.482 [2024-07-11 02:46:54.775819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.482 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.775947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.775974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.776069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.776095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.776194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.776221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.776313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.776339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.776430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.776458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.776542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.776570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.776689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.776716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.776843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.776871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.776973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.776999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.777085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.777112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.777200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.777227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.777312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.777338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.777436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.777463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.777559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.777586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.777681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.777707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.777796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.777822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.777906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.777932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.778018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.778045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.778131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.778158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.778246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.778273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.778373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.778398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.778487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.778521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.778611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.778637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.778728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.778753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.778841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.778867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.778957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.778985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.779070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.779097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.779191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.779218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.779303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.779329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.483 [2024-07-11 02:46:54.779419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.483 [2024-07-11 02:46:54.779445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.483 qpair failed and we were unable to recover it.
00:41:04.484 [2024-07-11 02:46:54.779539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.484 [2024-07-11 02:46:54.779565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.484 qpair failed and we were unable to recover it.
00:41:04.484 [2024-07-11 02:46:54.779656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.484 [2024-07-11 02:46:54.779682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.484 qpair failed and we were unable to recover it.
00:41:04.484 [2024-07-11 02:46:54.779774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.484 [2024-07-11 02:46:54.779800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.484 qpair failed and we were unable to recover it.
00:41:04.484 [2024-07-11 02:46:54.779899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.484 [2024-07-11 02:46:54.779933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.484 qpair failed and we were unable to recover it.
00:41:04.484 [2024-07-11 02:46:54.780019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.484 [2024-07-11 02:46:54.780044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.484 qpair failed and we were unable to recover it.
00:41:04.484 [2024-07-11 02:46:54.780132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.484 [2024-07-11 02:46:54.780158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.484 qpair failed and we were unable to recover it.
00:41:04.484 [2024-07-11 02:46:54.780256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.484 [2024-07-11 02:46:54.780283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.484 qpair failed and we were unable to recover it.
00:41:04.484 [2024-07-11 02:46:54.780376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.484 [2024-07-11 02:46:54.780403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.484 qpair failed and we were unable to recover it.
00:41:04.484 [2024-07-11 02:46:54.780489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.484 [2024-07-11 02:46:54.780522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.484 qpair failed and we were unable to recover it.
00:41:04.484 [2024-07-11 02:46:54.780619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.484 [2024-07-11 02:46:54.780647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.484 qpair failed and we were unable to recover it.
00:41:04.484 [2024-07-11 02:46:54.780736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.484 [2024-07-11 02:46:54.780763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.484 qpair failed and we were unable to recover it.
00:41:04.484 [2024-07-11 02:46:54.780845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.484 [2024-07-11 02:46:54.780871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.484 qpair failed and we were unable to recover it.
00:41:04.484 [2024-07-11 02:46:54.780969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.484 [2024-07-11 02:46:54.780996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.484 qpair failed and we were unable to recover it.
00:41:04.484 [2024-07-11 02:46:54.781091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.484 [2024-07-11 02:46:54.781119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.484 qpair failed and we were unable to recover it.
00:41:04.484 [2024-07-11 02:46:54.781214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.484 [2024-07-11 02:46:54.781241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.484 qpair failed and we were unable to recover it.
00:41:04.484 [2024-07-11 02:46:54.781337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.484 [2024-07-11 02:46:54.781365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.484 qpair failed and we were unable to recover it.
00:41:04.484 [2024-07-11 02:46:54.781453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.484 [2024-07-11 02:46:54.781479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.484 qpair failed and we were unable to recover it.
00:41:04.484 [2024-07-11 02:46:54.781580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.484 [2024-07-11 02:46:54.781607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.484 qpair failed and we were unable to recover it.
00:41:04.484 [2024-07-11 02:46:54.781704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.484 [2024-07-11 02:46:54.781732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.484 qpair failed and we were unable to recover it.
00:41:04.484 [2024-07-11 02:46:54.781814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.484 [2024-07-11 02:46:54.781842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.484 qpair failed and we were unable to recover it.
00:41:04.484 [2024-07-11 02:46:54.781931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.484 [2024-07-11 02:46:54.781958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.484 qpair failed and we were unable to recover it.
00:41:04.484 [2024-07-11 02:46:54.782051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.484 [2024-07-11 02:46:54.782078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.484 qpair failed and we were unable to recover it.
00:41:04.484 [2024-07-11 02:46:54.782163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.484 [2024-07-11 02:46:54.782190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.484 qpair failed and we were unable to recover it.
00:41:04.484 [2024-07-11 02:46:54.782272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.484 [2024-07-11 02:46:54.782297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.484 qpair failed and we were unable to recover it.
00:41:04.484 [2024-07-11 02:46:54.782388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.484 [2024-07-11 02:46:54.782421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.484 qpair failed and we were unable to recover it.
00:41:04.484 [2024-07-11 02:46:54.782505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.484 [2024-07-11 02:46:54.782537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.484 qpair failed and we were unable to recover it. 00:41:04.484 [2024-07-11 02:46:54.782624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.484 [2024-07-11 02:46:54.782650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.484 qpair failed and we were unable to recover it. 00:41:04.484 [2024-07-11 02:46:54.782738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.484 [2024-07-11 02:46:54.782765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.484 qpair failed and we were unable to recover it. 00:41:04.484 [2024-07-11 02:46:54.782850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.484 [2024-07-11 02:46:54.782876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.484 qpair failed and we were unable to recover it. 00:41:04.484 [2024-07-11 02:46:54.782968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.484 [2024-07-11 02:46:54.782996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.484 qpair failed and we were unable to recover it. 
00:41:04.484 [2024-07-11 02:46:54.783090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.484 [2024-07-11 02:46:54.783117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.484 qpair failed and we were unable to recover it. 00:41:04.485 [2024-07-11 02:46:54.783212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.783241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 00:41:04.485 [2024-07-11 02:46:54.783341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.783368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 00:41:04.485 [2024-07-11 02:46:54.783455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.783483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 00:41:04.485 [2024-07-11 02:46:54.783594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.783621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 
00:41:04.485 [2024-07-11 02:46:54.783719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.783746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 00:41:04.485 [2024-07-11 02:46:54.783842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.783870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 00:41:04.485 [2024-07-11 02:46:54.783960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.783986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 00:41:04.485 [2024-07-11 02:46:54.784070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.784096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 00:41:04.485 [2024-07-11 02:46:54.784185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.784212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 
00:41:04.485 [2024-07-11 02:46:54.784311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.784339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 00:41:04.485 [2024-07-11 02:46:54.784432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.784458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 00:41:04.485 [2024-07-11 02:46:54.784552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.784579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 00:41:04.485 [2024-07-11 02:46:54.784675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.784701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 00:41:04.485 [2024-07-11 02:46:54.784803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.784829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 
00:41:04.485 [2024-07-11 02:46:54.784917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.784943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 00:41:04.485 [2024-07-11 02:46:54.785031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.785058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 00:41:04.485 [2024-07-11 02:46:54.785167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.785199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 00:41:04.485 [2024-07-11 02:46:54.785299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.785328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 00:41:04.485 [2024-07-11 02:46:54.785421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.785449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 
00:41:04.485 [2024-07-11 02:46:54.785544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.785572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 00:41:04.485 [2024-07-11 02:46:54.785668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.785695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 00:41:04.485 [2024-07-11 02:46:54.785789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.785818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 00:41:04.485 [2024-07-11 02:46:54.785908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.785935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 00:41:04.485 [2024-07-11 02:46:54.786020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.786046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 
00:41:04.485 [2024-07-11 02:46:54.786142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.786168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 00:41:04.485 [2024-07-11 02:46:54.786257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.786284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 00:41:04.485 [2024-07-11 02:46:54.786377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.485 [2024-07-11 02:46:54.786409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.485 qpair failed and we were unable to recover it. 00:41:04.485 [2024-07-11 02:46:54.786504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.786535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.786627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.786654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 
00:41:04.486 [2024-07-11 02:46:54.786751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.786777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.786872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.786898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.786983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.787009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.787097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.787123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.787220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.787247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 
00:41:04.486 [2024-07-11 02:46:54.787343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.787371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.787463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.787491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.787587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.787614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.787706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.787733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.787832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.787859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 
00:41:04.486 [2024-07-11 02:46:54.787947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.787974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.788073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.788100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.788191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.788218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.788304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.788331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.788418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.788447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 
00:41:04.486 [2024-07-11 02:46:54.788537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.788565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.788651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.788678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.788763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.788789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.788883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.788909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.788990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.789016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 
00:41:04.486 [2024-07-11 02:46:54.789107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.789134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.789224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.789251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.789337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.789366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.789453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.789480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.789581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.789614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 
00:41:04.486 [2024-07-11 02:46:54.789701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.789727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.789816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.789845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.789934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.789965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.790095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.790149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.790237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.790263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 
00:41:04.486 [2024-07-11 02:46:54.790368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.790395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.790476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.790502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.486 [2024-07-11 02:46:54.790598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.486 [2024-07-11 02:46:54.790624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.486 qpair failed and we were unable to recover it. 00:41:04.487 [2024-07-11 02:46:54.790709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.487 [2024-07-11 02:46:54.790736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.487 qpair failed and we were unable to recover it. 00:41:04.487 [2024-07-11 02:46:54.790822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.487 [2024-07-11 02:46:54.790849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.487 qpair failed and we were unable to recover it. 
00:41:04.487 [2024-07-11 02:46:54.790930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.487 [2024-07-11 02:46:54.790956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.487 qpair failed and we were unable to recover it. 00:41:04.487 [2024-07-11 02:46:54.791053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.487 [2024-07-11 02:46:54.791080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.487 qpair failed and we were unable to recover it. 00:41:04.487 [2024-07-11 02:46:54.791168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.487 [2024-07-11 02:46:54.791195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.487 qpair failed and we were unable to recover it. 00:41:04.487 [2024-07-11 02:46:54.791293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.487 [2024-07-11 02:46:54.791320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.487 qpair failed and we were unable to recover it. 00:41:04.487 [2024-07-11 02:46:54.791403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.487 [2024-07-11 02:46:54.791429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.487 qpair failed and we were unable to recover it. 
00:41:04.487 [2024-07-11 02:46:54.791513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.791541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.791623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.791650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.791738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.791764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.791847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.791873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.791962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.791988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.792071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.792098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.792183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.792209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.792293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.792319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.792402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.792428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.792526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.792554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.792640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.792666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.792750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.792781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.792869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.792896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.792981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.793007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.793107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.793174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.793271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.793300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.793390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.793418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.793508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.793543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.793625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.793652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.793733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.793759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.793844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.793870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.793954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.793982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.794072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.794101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.794193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.794220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.794314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.794342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.794439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.794467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.794568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.794597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.794682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.794708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.794798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.794824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.794917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.794945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.795028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.795055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.795149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.795177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.795263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.487 [2024-07-11 02:46:54.795289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.487 qpair failed and we were unable to recover it.
00:41:04.487 [2024-07-11 02:46:54.795371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.795397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.795499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.795531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.795620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.795646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.795731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.795758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.795838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.795864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.795950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.795982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.796065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.796091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.796180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.796207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.796299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.796326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.796408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.796434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.796526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.796555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.796652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.796680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.796773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.796801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.796886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.796912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.796997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.797024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.797108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.797134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.797220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.797246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.797333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.797360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.797450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.797478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.797584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.797613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.797696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.797723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.797805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.797831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.797919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.797946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.798042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.798069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.798155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.798184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.798308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.798365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.798462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.798490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.798590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.798618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.798703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.798730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.798815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.798842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.798958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.798985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.799075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.799101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.799193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.799220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.799316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.799344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.799431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.799458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.799547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.799575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.799698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.799750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.799839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.799882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.799982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.800009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.800096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.800133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.488 [2024-07-11 02:46:54.800224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.488 [2024-07-11 02:46:54.800251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.488 qpair failed and we were unable to recover it.
00:41:04.489 [2024-07-11 02:46:54.800347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.489 [2024-07-11 02:46:54.800373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.489 qpair failed and we were unable to recover it.
00:41:04.489 [2024-07-11 02:46:54.800466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.489 [2024-07-11 02:46:54.800501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.489 qpair failed and we were unable to recover it.
00:41:04.489 [2024-07-11 02:46:54.800605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.489 [2024-07-11 02:46:54.800632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.489 qpair failed and we were unable to recover it.
00:41:04.489 [2024-07-11 02:46:54.800722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.489 [2024-07-11 02:46:54.800751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.489 qpair failed and we were unable to recover it.
00:41:04.489 [2024-07-11 02:46:54.800846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.489 [2024-07-11 02:46:54.800873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.489 qpair failed and we were unable to recover it.
00:41:04.489 [2024-07-11 02:46:54.800966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.489 [2024-07-11 02:46:54.800994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.489 qpair failed and we were unable to recover it.
00:41:04.489 [2024-07-11 02:46:54.801085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.489 [2024-07-11 02:46:54.801112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.489 qpair failed and we were unable to recover it.
00:41:04.489 [2024-07-11 02:46:54.801198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.489 [2024-07-11 02:46:54.801223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.489 qpair failed and we were unable to recover it.
00:41:04.489 [2024-07-11 02:46:54.801320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.489 [2024-07-11 02:46:54.801348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.489 qpair failed and we were unable to recover it.
00:41:04.489 [2024-07-11 02:46:54.801446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.489 [2024-07-11 02:46:54.801472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.489 qpair failed and we were unable to recover it.
00:41:04.489 [2024-07-11 02:46:54.801575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.489 [2024-07-11 02:46:54.801603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.489 qpair failed and we were unable to recover it.
00:41:04.489 [2024-07-11 02:46:54.801691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.489 [2024-07-11 02:46:54.801717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.489 qpair failed and we were unable to recover it.
00:41:04.489 [2024-07-11 02:46:54.801795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.489 [2024-07-11 02:46:54.801822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.489 qpair failed and we were unable to recover it.
00:41:04.489 [2024-07-11 02:46:54.801899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.489 [2024-07-11 02:46:54.801925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.489 qpair failed and we were unable to recover it.
00:41:04.489 [2024-07-11 02:46:54.802013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.489 [2024-07-11 02:46:54.802039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.489 qpair failed and we were unable to recover it.
00:41:04.489 [2024-07-11 02:46:54.802148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.489 [2024-07-11 02:46:54.802174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.489 qpair failed and we were unable to recover it.
00:41:04.489 [2024-07-11 02:46:54.802262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.489 [2024-07-11 02:46:54.802288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.489 qpair failed and we were unable to recover it.
00:41:04.489 [2024-07-11 02:46:54.802382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.489 [2024-07-11 02:46:54.802408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.489 qpair failed and we were unable to recover it.
00:41:04.489 [2024-07-11 02:46:54.802500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.489 [2024-07-11 02:46:54.802537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.489 qpair failed and we were unable to recover it.
00:41:04.489 [2024-07-11 02:46:54.802634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.489 [2024-07-11 02:46:54.802663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.489 qpair failed and we were unable to recover it.
00:41:04.489 [2024-07-11 02:46:54.802761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.489 [2024-07-11 02:46:54.802790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.489 qpair failed and we were unable to recover it.
00:41:04.489 [2024-07-11 02:46:54.802947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.489 [2024-07-11 02:46:54.802998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.489 qpair failed and we were unable to recover it.
00:41:04.489 [2024-07-11 02:46:54.803085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.489 [2024-07-11 02:46:54.803111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.489 qpair failed and we were unable to recover it.
00:41:04.775 [2024-07-11 02:46:54.803217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.775 [2024-07-11 02:46:54.803248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.775 qpair failed and we were unable to recover it.
00:41:04.775 [2024-07-11 02:46:54.803387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.775 [2024-07-11 02:46:54.803415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.775 qpair failed and we were unable to recover it.
00:41:04.775 [2024-07-11 02:46:54.803498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.775 [2024-07-11 02:46:54.803532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.775 qpair failed and we were unable to recover it.
00:41:04.775 [2024-07-11 02:46:54.803648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.775 [2024-07-11 02:46:54.803676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.775 qpair failed and we were unable to recover it.
00:41:04.775 [2024-07-11 02:46:54.803771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.775 [2024-07-11 02:46:54.803799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.775 qpair failed and we were unable to recover it.
00:41:04.775 [2024-07-11 02:46:54.803899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.775 [2024-07-11 02:46:54.803925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.775 qpair failed and we were unable to recover it.
00:41:04.775 [2024-07-11 02:46:54.804011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.775 [2024-07-11 02:46:54.804037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.775 qpair failed and we were unable to recover it.
00:41:04.775 [2024-07-11 02:46:54.804119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.775 [2024-07-11 02:46:54.804147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.775 qpair failed and we were unable to recover it.
00:41:04.775 [2024-07-11 02:46:54.804251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.775 [2024-07-11 02:46:54.804282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.775 qpair failed and we were unable to recover it.
00:41:04.775 [2024-07-11 02:46:54.804397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.775 [2024-07-11 02:46:54.804425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.775 qpair failed and we were unable to recover it.
00:41:04.775 [2024-07-11 02:46:54.804521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.775 [2024-07-11 02:46:54.804550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.775 qpair failed and we were unable to recover it.
00:41:04.775 [2024-07-11 02:46:54.804650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.775 [2024-07-11 02:46:54.804679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.775 qpair failed and we were unable to recover it.
00:41:04.775 [2024-07-11 02:46:54.804789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.775 [2024-07-11 02:46:54.804818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.775 qpair failed and we were unable to recover it.
00:41:04.775 [2024-07-11 02:46:54.804932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.775 [2024-07-11 02:46:54.804961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.775 qpair failed and we were unable to recover it.
00:41:04.775 [2024-07-11 02:46:54.805053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.775 [2024-07-11 02:46:54.805080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.775 qpair failed and we were unable to recover it.
00:41:04.775 [2024-07-11 02:46:54.805162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.775 [2024-07-11 02:46:54.805189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.775 qpair failed and we were unable to recover it.
00:41:04.775 [2024-07-11 02:46:54.805279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.775 [2024-07-11 02:46:54.805307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.775 qpair failed and we were unable to recover it.
00:41:04.775 [2024-07-11 02:46:54.805390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.775 [2024-07-11 02:46:54.805416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.775 qpair failed and we were unable to recover it.
00:41:04.775 [2024-07-11 02:46:54.805497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.775 [2024-07-11 02:46:54.805528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.775 qpair failed and we were unable to recover it. 00:41:04.775 [2024-07-11 02:46:54.805613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.775 [2024-07-11 02:46:54.805639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.775 qpair failed and we were unable to recover it. 00:41:04.775 [2024-07-11 02:46:54.805723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.775 [2024-07-11 02:46:54.805749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.775 qpair failed and we were unable to recover it. 00:41:04.775 [2024-07-11 02:46:54.805828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.775 [2024-07-11 02:46:54.805854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.775 qpair failed and we were unable to recover it. 00:41:04.775 [2024-07-11 02:46:54.805954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.805982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 
00:41:04.776 [2024-07-11 02:46:54.806078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.806106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.806223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.806250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.806330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.806357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.806471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.806498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.806596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.806622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 
00:41:04.776 [2024-07-11 02:46:54.806719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.806747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.806842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.806870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.806957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.806984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.807083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.807113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.807210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.807236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 
00:41:04.776 [2024-07-11 02:46:54.807320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.807347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.807458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.807484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.807634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.807690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.807775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.807801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.807891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.807917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 
00:41:04.776 [2024-07-11 02:46:54.808051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.808111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.808206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.808235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.808323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.808351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.808444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.808473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.808581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.808611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 
00:41:04.776 [2024-07-11 02:46:54.808714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.808742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.808827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.808856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.808941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.808967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.809087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.809114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.809194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.809220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 
00:41:04.776 [2024-07-11 02:46:54.809335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.809361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.809457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.809483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.809585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.809617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.809777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.809828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.809914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.809941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 
00:41:04.776 [2024-07-11 02:46:54.810024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.810051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.810141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.810168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.810326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.810380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.810576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.810605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.810688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.810715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 
00:41:04.776 [2024-07-11 02:46:54.810808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.810838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.810947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.810974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.811108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.811163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.811292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.811350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.811439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.811472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 
00:41:04.776 [2024-07-11 02:46:54.811573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.811599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.811702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.811731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.811837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.811878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.811977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.812017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.812140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.812169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 
00:41:04.776 [2024-07-11 02:46:54.812253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.812279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.812367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.812393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.812478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.812505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.812604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.812631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.812721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.812749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 
00:41:04.776 [2024-07-11 02:46:54.812854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.812882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.812981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.813008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.813089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.813115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.813204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.813230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.813347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.813373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 
00:41:04.776 [2024-07-11 02:46:54.813499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.813548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.813702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.813752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.813839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.813867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.813959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.813986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.814067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.814094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 
00:41:04.776 [2024-07-11 02:46:54.814175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.814201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.776 [2024-07-11 02:46:54.814282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.776 [2024-07-11 02:46:54.814308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.776 qpair failed and we were unable to recover it. 00:41:04.777 [2024-07-11 02:46:54.814427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.777 [2024-07-11 02:46:54.814453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.777 qpair failed and we were unable to recover it. 00:41:04.777 [2024-07-11 02:46:54.814539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.777 [2024-07-11 02:46:54.814566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.777 qpair failed and we were unable to recover it. 00:41:04.777 [2024-07-11 02:46:54.814652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.777 [2024-07-11 02:46:54.814678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.777 qpair failed and we were unable to recover it. 
00:41:04.777 [2024-07-11 02:46:54.814764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.777 [2024-07-11 02:46:54.814791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.777 qpair failed and we were unable to recover it. 00:41:04.777 [2024-07-11 02:46:54.814876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.777 [2024-07-11 02:46:54.814908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.777 qpair failed and we were unable to recover it. 00:41:04.777 [2024-07-11 02:46:54.814991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.777 [2024-07-11 02:46:54.815018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.777 qpair failed and we were unable to recover it. 00:41:04.777 [2024-07-11 02:46:54.815105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.777 [2024-07-11 02:46:54.815136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.777 qpair failed and we were unable to recover it. 00:41:04.777 [2024-07-11 02:46:54.815247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.777 [2024-07-11 02:46:54.815287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.777 qpair failed and we were unable to recover it. 
00:41:04.777 [2024-07-11 02:46:54.815370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:41:04.777 [2024-07-11 02:46:54.815397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 
00:41:04.777 qpair failed and we were unable to recover it. 
00:41:04.777 [2024-07-11 02:46:54.816059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:41:04.777 [2024-07-11 02:46:54.816110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 
00:41:04.777 qpair failed and we were unable to recover it. 
00:41:04.777 [2024-07-11 02:46:54.819952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:41:04.777 [2024-07-11 02:46:54.820009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 
00:41:04.777 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / sock connection error / qpair failed sequence repeated ~100 more times between 02:46:54.815 and 02:46:54.830 for tqpair=0x2266180, 0x7f333c000b90, and 0x7f332c000b90, all with addr=10.0.0.2, port=4420 ...]
00:41:04.779 [2024-07-11 02:46:54.830431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.830459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.830554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.830583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.830678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.830710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.830814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.830854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.830956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.830985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 
00:41:04.779 [2024-07-11 02:46:54.831094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.831120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.831220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.831249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.831352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.831379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.831556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.831623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.831708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.831734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 
00:41:04.779 [2024-07-11 02:46:54.831830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.831859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.831968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.831996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.832091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.832117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.832239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.832295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.832397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.832438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 
00:41:04.779 [2024-07-11 02:46:54.832612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.832640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.832759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.832801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.832889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.832917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.833006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.833033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.833138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.833181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 
00:41:04.779 [2024-07-11 02:46:54.833266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.833293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.833384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.833413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.833523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.833566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.833733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.833785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.833885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.833913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 
00:41:04.779 [2024-07-11 02:46:54.834016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.834042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.834143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.834185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.834302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.834345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.834448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.834478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.834598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.834646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 
00:41:04.779 [2024-07-11 02:46:54.834781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.834822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.834912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.834939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.835030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.835057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.835158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.835187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.835306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.835345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 
00:41:04.779 [2024-07-11 02:46:54.835429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.835455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.835547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.835576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.835664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.835690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.835776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.835802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.835903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.835931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 
00:41:04.779 [2024-07-11 02:46:54.836031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.836057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.836152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.836178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.836291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.836317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.836447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.836500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.836600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.836627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 
00:41:04.779 [2024-07-11 02:46:54.836714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.836742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.836901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.836948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.837033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.837059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.837165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.837220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.837313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.837340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 
00:41:04.779 [2024-07-11 02:46:54.837427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.837454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.837567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.837627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.837738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.837766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.837887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.837927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.838065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.838105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 
00:41:04.779 [2024-07-11 02:46:54.838214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.838242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.838370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.838419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.838572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.779 [2024-07-11 02:46:54.838628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.779 qpair failed and we were unable to recover it. 00:41:04.779 [2024-07-11 02:46:54.838730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.838759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 00:41:04.780 [2024-07-11 02:46:54.838881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.838942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 
00:41:04.780 [2024-07-11 02:46:54.839044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.839073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 00:41:04.780 [2024-07-11 02:46:54.839190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.839218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 00:41:04.780 [2024-07-11 02:46:54.839315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.839341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 00:41:04.780 [2024-07-11 02:46:54.839433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.839461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 00:41:04.780 [2024-07-11 02:46:54.839554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.839583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 
00:41:04.780 [2024-07-11 02:46:54.839676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.839703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 00:41:04.780 [2024-07-11 02:46:54.839794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.839824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 00:41:04.780 [2024-07-11 02:46:54.839923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.839952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 00:41:04.780 [2024-07-11 02:46:54.840071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.840097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 00:41:04.780 [2024-07-11 02:46:54.840239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.840289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 
00:41:04.780 [2024-07-11 02:46:54.840382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.840408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 00:41:04.780 [2024-07-11 02:46:54.840566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.840616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 00:41:04.780 [2024-07-11 02:46:54.840704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.840730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 00:41:04.780 [2024-07-11 02:46:54.840830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.840859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 00:41:04.780 [2024-07-11 02:46:54.840965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.840994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 
00:41:04.780 [2024-07-11 02:46:54.841134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.841174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 00:41:04.780 [2024-07-11 02:46:54.841276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.841305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 00:41:04.780 [2024-07-11 02:46:54.841407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.841434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 00:41:04.780 [2024-07-11 02:46:54.841535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.841562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 00:41:04.780 [2024-07-11 02:46:54.841670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.841732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 
00:41:04.780 [2024-07-11 02:46:54.841818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.841845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 00:41:04.780 [2024-07-11 02:46:54.841961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.842015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 00:41:04.780 [2024-07-11 02:46:54.842140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.842168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 00:41:04.780 [2024-07-11 02:46:54.842261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.842288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 00:41:04.780 [2024-07-11 02:46:54.842385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.780 [2024-07-11 02:46:54.842417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.780 qpair failed and we were unable to recover it. 
00:41:04.780 [2024-07-11 02:46:54.842508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.842545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.842637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.842665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.842749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.842775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.842877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.842906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.842994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.843022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.843110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.843137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.843238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.843267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.843383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.843413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.843517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.843545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.843638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.843666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.843751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.843778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.843864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.843899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.844020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.844046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.844148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.844176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.844293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.844321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.844433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.844461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.844618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.844662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.844773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.844816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.844917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.844947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.845053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.845083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.845200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.845244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.845438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.845488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.845621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.845666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.845766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.845794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.845913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.845955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.846042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.846069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.846170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.846198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.846311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.846340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.846450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.846478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.846583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.846610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.846716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.846744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.846843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.846869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.846971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.847002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.847151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.847192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.847298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.847329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.847458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.847500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.780 [2024-07-11 02:46:54.847613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.780 [2024-07-11 02:46:54.847643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.780 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.847753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.847781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.847878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.847909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.847999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.848025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.848127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.848155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.848250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.848276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.848383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.848441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.848539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.848568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.848670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.848699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.848846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.848899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.848987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.849014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.849097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.849123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.849211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.849242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.849412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.849468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.849571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.849600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.849734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.849761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.849881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.849911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.850007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.850036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.850132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.850160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.850268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.850329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.850428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.850456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.850570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.850596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.850680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.850706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.850872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.850922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.851006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.851033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.851129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.851157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.851251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.851277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.851378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.851419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.851520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.851562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.851669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.851701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.851844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.851927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.852014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.852042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.852149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.852191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.852294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.852324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.852444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.852489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.852585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.852613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.852708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.852734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.852818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.852844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.853004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.853054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.853152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.853182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.853281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.853307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.853420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.853446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.853549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.853581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.853696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.853722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.853806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.853834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.853976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.854028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.854116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.854145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.854247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.854274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.854380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.854422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.854536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.854578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.854680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.854707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.854800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.854828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.854950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.854977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.855100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.855157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.855256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.855285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.855460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.855515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.855603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.855630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.855721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.781 [2024-07-11 02:46:54.855747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.781 qpair failed and we were unable to recover it.
00:41:04.781 [2024-07-11 02:46:54.855832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.782 [2024-07-11 02:46:54.855859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.782 qpair failed and we were unable to recover it.
00:41:04.782 [2024-07-11 02:46:54.855954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.855984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.856093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.856123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.856216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.856242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.856354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.856380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.856507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.856554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 
00:41:04.782 [2024-07-11 02:46:54.856649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.856677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.856782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.856810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.856893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.856919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.857024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.857087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.857187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.857215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 
00:41:04.782 [2024-07-11 02:46:54.857318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.857344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.857432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.857458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.857544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.857574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.857664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.857691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.857810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.857837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 
00:41:04.782 [2024-07-11 02:46:54.857922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.857949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.858043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.858070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.858155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.858183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.858269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.858297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.858389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.858416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 
00:41:04.782 [2024-07-11 02:46:54.858502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.858540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.858627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.858654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.858749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.858778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.858916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.858944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.859090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.859133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 
00:41:04.782 [2024-07-11 02:46:54.859232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.859261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.859379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.859422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.859516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.859544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.859647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.859677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.859797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.859853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 
00:41:04.782 [2024-07-11 02:46:54.859946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.859979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.860063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.860089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.860180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.860206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.860293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.860321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.860407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.860433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 
00:41:04.782 [2024-07-11 02:46:54.860610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.860639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.860753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.860795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.860934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.860976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.861069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.861095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.861202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.861262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 
00:41:04.782 [2024-07-11 02:46:54.861360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.861389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.861489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.861522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.861625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.861652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.861753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.861784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.861892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.861921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 
00:41:04.782 [2024-07-11 02:46:54.862017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.862047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.862136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.862165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.862248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.862274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.862375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.862401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.862528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.862555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 
00:41:04.782 [2024-07-11 02:46:54.862645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.862672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.862837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.862890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.862987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.863016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.863116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.863145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.863239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.863268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 
00:41:04.782 [2024-07-11 02:46:54.863362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.863389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.863492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.863546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.863657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.863684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.863844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.863897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.863977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.864004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 
00:41:04.782 [2024-07-11 02:46:54.864109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.864170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.864303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.782 [2024-07-11 02:46:54.864342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.782 qpair failed and we were unable to recover it. 00:41:04.782 [2024-07-11 02:46:54.864439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.783 [2024-07-11 02:46:54.864468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.783 qpair failed and we were unable to recover it. 00:41:04.783 [2024-07-11 02:46:54.864575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.783 [2024-07-11 02:46:54.864604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.783 qpair failed and we were unable to recover it. 00:41:04.783 [2024-07-11 02:46:54.864693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.783 [2024-07-11 02:46:54.864725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.783 qpair failed and we were unable to recover it. 
00:41:04.783 [2024-07-11 02:46:54.864821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.783 [2024-07-11 02:46:54.864851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.783 qpair failed and we were unable to recover it. 00:41:04.783 [2024-07-11 02:46:54.864954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.783 [2024-07-11 02:46:54.864995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.783 qpair failed and we were unable to recover it. 00:41:04.783 [2024-07-11 02:46:54.865103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.783 [2024-07-11 02:46:54.865145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.783 qpair failed and we were unable to recover it. 00:41:04.783 [2024-07-11 02:46:54.865238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.783 [2024-07-11 02:46:54.865266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.783 qpair failed and we were unable to recover it. 00:41:04.783 [2024-07-11 02:46:54.865408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.783 [2024-07-11 02:46:54.865461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.783 qpair failed and we were unable to recover it. 
00:41:04.783 [2024-07-11 02:46:54.865547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.783 [2024-07-11 02:46:54.865574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.783 qpair failed and we were unable to recover it. 00:41:04.783 [2024-07-11 02:46:54.865662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.783 [2024-07-11 02:46:54.865690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.783 qpair failed and we were unable to recover it. 00:41:04.783 [2024-07-11 02:46:54.865873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.783 [2024-07-11 02:46:54.865936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.783 qpair failed and we were unable to recover it. 00:41:04.783 [2024-07-11 02:46:54.866060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.783 [2024-07-11 02:46:54.866109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.783 qpair failed and we were unable to recover it. 00:41:04.783 [2024-07-11 02:46:54.866228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.783 [2024-07-11 02:46:54.866259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.783 qpair failed and we were unable to recover it. 
00:41:04.783 [2024-07-11 02:46:54.866399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.783 [2024-07-11 02:46:54.866454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.783 qpair failed and we were unable to recover it. 00:41:04.783 [2024-07-11 02:46:54.866545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.783 [2024-07-11 02:46:54.866572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.783 qpair failed and we were unable to recover it. 00:41:04.783 [2024-07-11 02:46:54.866673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.783 [2024-07-11 02:46:54.866701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.783 qpair failed and we were unable to recover it. 00:41:04.783 [2024-07-11 02:46:54.866811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.783 [2024-07-11 02:46:54.866839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.783 qpair failed and we were unable to recover it. 00:41:04.783 [2024-07-11 02:46:54.866945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.783 [2024-07-11 02:46:54.866988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.783 qpair failed and we were unable to recover it. 
00:41:04.783 [2024-07-11 02:46:54.867097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.783 [2024-07-11 02:46:54.867140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.783 qpair failed and we were unable to recover it. 00:41:04.783 [2024-07-11 02:46:54.867280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.783 [2024-07-11 02:46:54.867334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.783 qpair failed and we were unable to recover it. 00:41:04.783 [2024-07-11 02:46:54.867538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.783 [2024-07-11 02:46:54.867585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.783 qpair failed and we were unable to recover it. 00:41:04.783 [2024-07-11 02:46:54.867674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.783 [2024-07-11 02:46:54.867701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.783 qpair failed and we were unable to recover it. 00:41:04.783 [2024-07-11 02:46:54.867799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.783 [2024-07-11 02:46:54.867828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.783 qpair failed and we were unable to recover it. 
00:41:04.783 [2024-07-11 02:46:54.867941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.783 [2024-07-11 02:46:54.867969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.783 qpair failed and we were unable to recover it.
00:41:04.783 [... the same connect() failed (errno = 111) / "qpair failed and we were unable to recover it." record pair repeats ~115 times between 02:46:54.867941 and 02:46:54.884397, cycling over tqpair=0x7f3334000b90, 0x7f332c000b90, 0x7f333c000b90, and 0x2266180, all targeting addr=10.0.0.2, port=4420 ...]
00:41:04.785 [2024-07-11 02:46:54.884502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.884549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.884640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.884668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.884751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.884777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.884873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.884902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.885039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.885093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 
00:41:04.785 [2024-07-11 02:46:54.885180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.885208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.885328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.885382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.885486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.885541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.885674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.885718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.885828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.885857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 
00:41:04.785 [2024-07-11 02:46:54.885979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.886021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.886137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.886194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.886314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.886364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.886476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.886506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.886615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.886646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 
00:41:04.785 [2024-07-11 02:46:54.886758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.886788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.886905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.886948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.887044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.887073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.887180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.887221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.887307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.887334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 
00:41:04.785 [2024-07-11 02:46:54.887419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.887445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.887537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.887564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.887660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.887686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.887772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.887798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.887898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.887927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 
00:41:04.785 [2024-07-11 02:46:54.888044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.888085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.888186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.888215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.888309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.888337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.888436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.888465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.888597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.888640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 
00:41:04.785 [2024-07-11 02:46:54.888742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.888772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.888867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.888893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.888988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.889015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.889106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.889133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.889213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.889240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 
00:41:04.785 [2024-07-11 02:46:54.889321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.889347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.889428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.889454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.889539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.889566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.889652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.889680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.889783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.889813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 
00:41:04.785 [2024-07-11 02:46:54.889907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.889934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.890023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.890049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.890135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.890161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.890251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.890277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.890364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.890391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 
00:41:04.785 [2024-07-11 02:46:54.890498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.890549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.785 [2024-07-11 02:46:54.890656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.785 [2024-07-11 02:46:54.890717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.785 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.890805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.890832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.890942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.890984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.891086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.891115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 
00:41:04.786 [2024-07-11 02:46:54.891244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.891287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.891398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.891441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.891535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.891567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.891676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.891718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.891804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.891831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 
00:41:04.786 [2024-07-11 02:46:54.891922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.891949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.892051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.892093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.892181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.892208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.892295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.892322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.892454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.892480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 
00:41:04.786 [2024-07-11 02:46:54.892598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.892640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.892729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.892756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.892844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.892871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.892958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.892984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.893076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.893102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 
00:41:04.786 [2024-07-11 02:46:54.893189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.893215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.893308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.893338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.893427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.893455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.893554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.893582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.893670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.893697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 
00:41:04.786 [2024-07-11 02:46:54.893822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.893876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.893994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.894049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.894152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.894193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.894306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.894348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.894429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.894455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 
00:41:04.786 [2024-07-11 02:46:54.894567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.894610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.894710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.894740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.894851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.894881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.895004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.895066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.895180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.895238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 
00:41:04.786 [2024-07-11 02:46:54.895349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.895390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.895496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.895546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.895629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.895656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.895771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.895825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.895952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.896005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 
00:41:04.786 [2024-07-11 02:46:54.896096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.896124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.896227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.896268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.896358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.896386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.896475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.896501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.896597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.896623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 
00:41:04.786 [2024-07-11 02:46:54.896713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.896740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.896830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.896856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.896945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.896974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.897077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.897104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.897195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.897224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 
00:41:04.786 [2024-07-11 02:46:54.897315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.897341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.897434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.897461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.897551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.897580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.897665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.897692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.897785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.897813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 
00:41:04.786 [2024-07-11 02:46:54.897905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.897933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.898032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.898059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.898166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.898208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.898315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.898345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.786 qpair failed and we were unable to recover it. 00:41:04.786 [2024-07-11 02:46:54.898451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.786 [2024-07-11 02:46:54.898480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 
00:41:04.787 [2024-07-11 02:46:54.898607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.898639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.898765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.898817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.898928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.898988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.899142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.899193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.899303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.899345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 
00:41:04.787 [2024-07-11 02:46:54.899454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.899498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.899604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.899632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.899736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.899766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.899862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.899888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.899986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.900016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 
00:41:04.787 [2024-07-11 02:46:54.900131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.900172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.900278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.900320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.900404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.900430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.900527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.900554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.900639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.900666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 
00:41:04.787 [2024-07-11 02:46:54.900769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.900797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.900891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.900919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.901015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.901044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.901131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.901158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.901252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.901281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 
00:41:04.787 [2024-07-11 02:46:54.901368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.901396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.901484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.901522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.901615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.901641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.901736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.901761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.901895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.901921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 
00:41:04.787 [2024-07-11 02:46:54.902012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.902040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.902141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.902170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.902258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.902284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.902394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.902436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.902525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.902553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 
00:41:04.787 [2024-07-11 02:46:54.902657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.902711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.902820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.902862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.902946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.902975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.903059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.903085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.903183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.903212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 
00:41:04.787 [2024-07-11 02:46:54.903302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.903329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.903420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.903451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.903546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.903581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.903670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.903697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.903815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.903861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 
00:41:04.787 [2024-07-11 02:46:54.903973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.904025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.904118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.904149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.904240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.904268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.904353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.904381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.904472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.904500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 
00:41:04.787 [2024-07-11 02:46:54.904609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.904640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.904757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.904798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.904885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.904913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.905017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.905046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.905157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.905186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 
00:41:04.787 [2024-07-11 02:46:54.905287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.905316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.905409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.905444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.905546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.905575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.905665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.905691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.905793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.905820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 
00:41:04.787 [2024-07-11 02:46:54.905913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.905941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.906029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.906057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.906149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.906177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.906262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.906289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.906379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.906405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 
00:41:04.787 [2024-07-11 02:46:54.906487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.906518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.906627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.906671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.906780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.906824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.787 [2024-07-11 02:46:54.906927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.787 [2024-07-11 02:46:54.906969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.787 qpair failed and we were unable to recover it. 00:41:04.788 [2024-07-11 02:46:54.907059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.788 [2024-07-11 02:46:54.907085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.788 qpair failed and we were unable to recover it. 
00:41:04.788 [2024-07-11 02:46:54.907175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.907202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.907305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.907344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.907451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.907478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.907590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.907638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.907732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.907758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.907840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.907866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.907953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.907980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.908073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.908100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.908206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.908235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.908357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.908399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.908484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.908518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.908621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.908663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.908774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.908816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.908906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.908932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.909041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.909083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.909185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.909226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.909333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.909375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.909484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.909534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.909623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.909650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.909757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.909800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.909904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.909948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.910035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.910063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.910169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.910211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.910297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.910324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.910427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.910460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.910588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.910618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.910740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.910784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.910883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.910913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.911038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.911079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.911190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.911236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.911351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.911395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.911496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.911544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.911662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.911721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.911816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.911846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.911992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.912050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.912182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.912212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.912313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.912339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.912433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.912460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.912550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.912577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.912666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.912693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.912789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.912819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.912942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.912983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.913092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.913121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.913263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.913304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.913397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.913423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.913567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.913629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.913739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.913780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.913921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.913980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.914078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.914105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.914199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.914228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.914356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.914389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.914487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.914521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.914623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.914665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.914771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.914801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.914914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.914943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.915064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.915107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.915204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.915231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.915330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.915368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.915524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.915576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.915670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.915698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.915784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.915811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.915892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.915919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.916010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.788 [2024-07-11 02:46:54.916038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.788 qpair failed and we were unable to recover it.
00:41:04.788 [2024-07-11 02:46:54.916143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.916172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.916275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.916301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.916386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.916413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.916518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.916548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.916647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.916673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.916770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.916797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.916879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.916904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.917011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.917053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.917187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.917246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.917333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.917360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.917468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.917525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.917631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.917672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.917775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.917816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.917921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.917963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.918072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.918114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.918217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.918268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.918363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.918393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.918486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.918520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.918637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.918681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.918786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.918828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.918915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.918942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.919029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.919060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.919164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.919191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.919303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.919345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.919435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.919463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.919579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.919625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.919713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.919740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.919852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.919893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.919986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.920013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.920122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.920184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.920285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.920311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.920393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.920422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.920505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.920539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.920628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.920657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.920760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.920790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.920897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.920923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.921009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.921035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.921125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.921153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.921244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.921271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.921355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.921382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.921466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.921494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.921602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.921643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.921747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.921776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.921904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.921947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.922069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.922107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.922228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.922269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.922375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.922402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.922508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.922559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.922672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.922716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.922826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.922878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.922986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.923034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.923122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.789 [2024-07-11 02:46:54.923148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.789 qpair failed and we were unable to recover it.
00:41:04.789 [2024-07-11 02:46:54.923245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.789 [2024-07-11 02:46:54.923275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.789 qpair failed and we were unable to recover it. 00:41:04.789 [2024-07-11 02:46:54.923396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.789 [2024-07-11 02:46:54.923425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.789 qpair failed and we were unable to recover it. 00:41:04.789 [2024-07-11 02:46:54.923527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.789 [2024-07-11 02:46:54.923553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.789 qpair failed and we were unable to recover it. 00:41:04.789 [2024-07-11 02:46:54.923665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.789 [2024-07-11 02:46:54.923706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.789 qpair failed and we were unable to recover it. 00:41:04.789 [2024-07-11 02:46:54.923805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.789 [2024-07-11 02:46:54.923834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.789 qpair failed and we were unable to recover it. 
00:41:04.789 [2024-07-11 02:46:54.923934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.789 [2024-07-11 02:46:54.923959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.789 qpair failed and we were unable to recover it. 00:41:04.789 [2024-07-11 02:46:54.924068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.789 [2024-07-11 02:46:54.924097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.789 qpair failed and we were unable to recover it. 00:41:04.789 [2024-07-11 02:46:54.924217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.924247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.924362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.924392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.924507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.924554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 
00:41:04.790 [2024-07-11 02:46:54.924670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.924712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.924798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.924826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.924932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.924975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.925084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.925115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.925218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.925245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 
00:41:04.790 [2024-07-11 02:46:54.925347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.925373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.925454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.925481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.925600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.925643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.925731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.925757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.925844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.925871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 
00:41:04.790 [2024-07-11 02:46:54.925966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.926008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.926124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.926166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.926278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.926327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.926434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.926483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.926603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.926649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 
00:41:04.790 [2024-07-11 02:46:54.926756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.926800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.926910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.926953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.927062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.927106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.927209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.927252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.927360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.927402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 
00:41:04.790 [2024-07-11 02:46:54.927491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.927524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.927638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.927680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.927785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.927815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.927940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.927982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.928091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.928135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 
00:41:04.790 [2024-07-11 02:46:54.928244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.928288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.928381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.928408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.928519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.928553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.928671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.928713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.928817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.928850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 
00:41:04.790 [2024-07-11 02:46:54.928960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.928988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.929099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.929144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.929253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.929286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.929401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.929428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.929534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.929561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 
00:41:04.790 [2024-07-11 02:46:54.929706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.929753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.929912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.929942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.930064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.930093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.930239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.930293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.930397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.930441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 
00:41:04.790 [2024-07-11 02:46:54.930550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.930581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.930685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.930716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.930853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.930880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.930981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.931010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.931107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.931134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 
00:41:04.790 [2024-07-11 02:46:54.931223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.931250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.931356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.931397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.931504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.931560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.931696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.931738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.931860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.931901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 
00:41:04.790 [2024-07-11 02:46:54.931997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.932027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.932148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.932209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.932344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.932373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.932496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.932566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.932685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.932728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 
00:41:04.790 [2024-07-11 02:46:54.932844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.932885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.933034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.933079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.933172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.933200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.933306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.933349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 00:41:04.790 [2024-07-11 02:46:54.933452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.933485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.790 qpair failed and we were unable to recover it. 
00:41:04.790 [2024-07-11 02:46:54.933625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.790 [2024-07-11 02:46:54.933667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.933814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.933859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.933981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.934011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.934132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.934175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.934281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.934312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 
00:41:04.791 [2024-07-11 02:46:54.934432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.934475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.934623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.934664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.934805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.934852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.934943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.934970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.935060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.935087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 
00:41:04.791 [2024-07-11 02:46:54.935201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.935245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.935377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.935421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.935533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.935576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.935693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.935734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.935844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.935885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 
00:41:04.791 [2024-07-11 02:46:54.935988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.936030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.936132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.936161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.936285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.936329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.936424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.936451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.936569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.936617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 
00:41:04.791 [2024-07-11 02:46:54.936728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.936775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.936868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.936895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.936993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.937023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.937146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.937181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.937304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.937352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 
00:41:04.791 [2024-07-11 02:46:54.937462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.937498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.937631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.937675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.937781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.937812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.937938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.937980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.938081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.938111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 
00:41:04.791 [2024-07-11 02:46:54.938221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.938247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.938335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.938362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.938449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.938476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.938590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.938618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.938715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.938743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 
00:41:04.791 [2024-07-11 02:46:54.938832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.938860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.938949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.938977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.939101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.939131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.939224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.939252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.939361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.939404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 
00:41:04.791 [2024-07-11 02:46:54.939495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.939529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.939619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.939646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.939761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.939798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.939949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.939992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.940094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.940124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 
00:41:04.791 [2024-07-11 02:46:54.940233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.940259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.940340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.940366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.940459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.940488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.940597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.940625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.940719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.940745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 
00:41:04.791 [2024-07-11 02:46:54.940853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.940887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.940994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.941020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.941124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.941165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.941269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.941312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.941400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.941427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 
00:41:04.791 [2024-07-11 02:46:54.941521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.941548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.941639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.941665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.941772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.941815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.941925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.941966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.942061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.942090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 
00:41:04.791 [2024-07-11 02:46:54.942202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.942244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.942339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.942370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.942501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.942550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.942684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.791 [2024-07-11 02:46:54.942728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.791 qpair failed and we were unable to recover it. 00:41:04.791 [2024-07-11 02:46:54.942838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.942879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 
00:41:04.792 [2024-07-11 02:46:54.942989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.943032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.943122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.943149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.943234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.943261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.943354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.943381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.943488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.943536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 
00:41:04.792 [2024-07-11 02:46:54.943667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.943697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.943817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.943858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.943987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.944033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.944164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.944208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.944324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.944367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 
00:41:04.792 [2024-07-11 02:46:54.944472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.944522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.944643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.944686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.944793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.944836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.944945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.944986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.945071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.945098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 
00:41:04.792 [2024-07-11 02:46:54.945242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.945287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.945397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.945438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.945573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.945616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.945727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.945768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.945893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.945940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 
00:41:04.792 [2024-07-11 02:46:54.946049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.946078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.946197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.946241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.946356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.946402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.946524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.946567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.946683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.946715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 
00:41:04.792 [2024-07-11 02:46:54.946824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.946854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.946956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.946984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.947133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.947178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.947271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.947300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.947395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.947423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 
00:41:04.792 [2024-07-11 02:46:54.947514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.947543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.947651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.947692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.947820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.947864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.948022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.948064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.948156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.948185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 
00:41:04.792 [2024-07-11 02:46:54.948311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.948356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.948469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.948521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.948614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.948641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.948758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.948799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.948919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.948949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 
00:41:04.792 [2024-07-11 02:46:54.949081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.949115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.949234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.949275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.949364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.949390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.949481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.949519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 00:41:04.792 [2024-07-11 02:46:54.949619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.792 [2024-07-11 02:46:54.949645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.792 qpair failed and we were unable to recover it. 
00:41:04.792 [2024-07-11 02:46:54.949738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.792 [2024-07-11 02:46:54.949765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.792 qpair failed and we were unable to recover it.
00:41:04.792 [2024-07-11 02:46:54.949876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.792 [2024-07-11 02:46:54.949903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.792 qpair failed and we were unable to recover it.
00:41:04.792 [2024-07-11 02:46:54.949993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.792 [2024-07-11 02:46:54.950019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.792 qpair failed and we were unable to recover it.
00:41:04.792 [2024-07-11 02:46:54.950127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.792 [2024-07-11 02:46:54.950154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.792 qpair failed and we were unable to recover it.
00:41:04.792 [2024-07-11 02:46:54.950278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.792 [2024-07-11 02:46:54.950326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.792 qpair failed and we were unable to recover it.
00:41:04.792 [2024-07-11 02:46:54.950430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.792 [2024-07-11 02:46:54.950457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.792 qpair failed and we were unable to recover it.
00:41:04.792 [2024-07-11 02:46:54.950577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.792 [2024-07-11 02:46:54.950619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.792 qpair failed and we were unable to recover it.
00:41:04.792 [2024-07-11 02:46:54.950725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.792 [2024-07-11 02:46:54.950768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.792 qpair failed and we were unable to recover it.
00:41:04.792 [2024-07-11 02:46:54.950877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.792 [2024-07-11 02:46:54.950906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.792 qpair failed and we were unable to recover it.
00:41:04.792 [2024-07-11 02:46:54.951023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.792 [2024-07-11 02:46:54.951063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.792 qpair failed and we were unable to recover it.
00:41:04.792 [2024-07-11 02:46:54.951185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.792 [2024-07-11 02:46:54.951230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.792 qpair failed and we were unable to recover it.
00:41:04.792 [2024-07-11 02:46:54.951358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.792 [2024-07-11 02:46:54.951401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.951502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.951540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.951670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.951712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.951802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.951829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.951948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.951988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.952100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.952142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.952252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.952297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.952413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.952455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.952555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.952583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.952701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.952743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.952849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.952891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.952994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.953021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.953136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.953178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.953303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.953348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.953454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.953496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.953614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.953656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.953764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.953805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.953923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.953966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.954058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.954085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.954206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.954236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.954364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.954413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.954531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.954559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.954681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.954712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.954830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.954856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.954945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.954971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.955079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.955122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.955233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.955278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.955371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.955398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.955499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.955531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.955625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.955652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.955766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.955807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.955917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.955960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.956067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.956110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.956217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.956258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.956386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.956417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.956538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.956569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.956671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.956697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.956784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.956810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.956920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.956961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.957064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.957105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.957190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.957216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.957321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.957365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.957458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.957484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.957594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.957622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.957731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.957773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.957875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.957907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.958007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.958034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.958120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.958148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.958240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.958267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.958360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.958387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.958471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.958497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.958627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.958669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.958769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.958800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.958922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.958952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.959067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.959096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.959205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.959235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.959352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.959396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.959500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.959551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.959665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.959696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.959794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.959820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.959920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.959949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.793 [2024-07-11 02:46:54.960062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.793 [2024-07-11 02:46:54.960092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.793 qpair failed and we were unable to recover it.
00:41:04.794 [2024-07-11 02:46:54.960193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.794 [2024-07-11 02:46:54.960220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.794 qpair failed and we were unable to recover it.
00:41:04.794 [2024-07-11 02:46:54.960307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.794 [2024-07-11 02:46:54.960334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.794 qpair failed and we were unable to recover it.
00:41:04.794 [2024-07-11 02:46:54.960421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.794 [2024-07-11 02:46:54.960447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.794 qpair failed and we were unable to recover it.
00:41:04.794 [2024-07-11 02:46:54.960583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.794 [2024-07-11 02:46:54.960625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.794 qpair failed and we were unable to recover it.
00:41:04.794 [2024-07-11 02:46:54.960738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.794 [2024-07-11 02:46:54.960780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.794 qpair failed and we were unable to recover it.
00:41:04.794 [2024-07-11 02:46:54.960878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.794 [2024-07-11 02:46:54.960907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.794 qpair failed and we were unable to recover it.
00:41:04.794 [2024-07-11 02:46:54.961023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.794 [2024-07-11 02:46:54.961052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.794 qpair failed and we were unable to recover it.
00:41:04.794 [2024-07-11 02:46:54.961169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.794 [2024-07-11 02:46:54.961199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.794 qpair failed and we were unable to recover it.
00:41:04.794 [2024-07-11 02:46:54.961315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.794 [2024-07-11 02:46:54.961348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.794 qpair failed and we were unable to recover it.
00:41:04.794 [2024-07-11 02:46:54.961467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.794 [2024-07-11 02:46:54.961520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.794 qpair failed and we were unable to recover it.
00:41:04.794 [2024-07-11 02:46:54.961635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.794 [2024-07-11 02:46:54.961677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.794 qpair failed and we were unable to recover it.
00:41:04.794 [2024-07-11 02:46:54.961792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.794 [2024-07-11 02:46:54.961824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.794 qpair failed and we were unable to recover it.
00:41:04.794 [2024-07-11 02:46:54.961949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.794 [2024-07-11 02:46:54.961982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.794 qpair failed and we were unable to recover it.
00:41:04.794 [2024-07-11 02:46:54.962099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.794 [2024-07-11 02:46:54.962126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.794 qpair failed and we were unable to recover it.
00:41:04.794 [2024-07-11 02:46:54.962239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.794 [2024-07-11 02:46:54.962271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.794 qpair failed and we were unable to recover it.
00:41:04.794 [2024-07-11 02:46:54.962394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.794 [2024-07-11 02:46:54.962436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.794 qpair failed and we were unable to recover it.
00:41:04.794 [2024-07-11 02:46:54.962527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.794 [2024-07-11 02:46:54.962554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.794 qpair failed and we were unable to recover it.
00:41:04.794 [2024-07-11 02:46:54.962661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.794 [2024-07-11 02:46:54.962691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.794 qpair failed and we were unable to recover it.
00:41:04.794 [2024-07-11 02:46:54.962790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.794 [2024-07-11 02:46:54.962816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.794 qpair failed and we were unable to recover it.
00:41:04.794 [2024-07-11 02:46:54.962923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.794 [2024-07-11 02:46:54.962954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.794 qpair failed and we were unable to recover it.
00:41:04.794 [2024-07-11 02:46:54.963067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.794 [2024-07-11 02:46:54.963097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.794 qpair failed and we were unable to recover it.
00:41:04.794 [2024-07-11 02:46:54.963220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.794 [2024-07-11 02:46:54.963250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.794 qpair failed and we were unable to recover it.
00:41:04.794 [2024-07-11 02:46:54.963377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.963409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.963515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.963542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.963653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.963694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.963809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.963839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.963974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.964015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 
00:41:04.794 [2024-07-11 02:46:54.964114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.964145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.964265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.964306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.964395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.964425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.964530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.964569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.964674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.964718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 
00:41:04.794 [2024-07-11 02:46:54.964842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.964886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.964971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.964998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.965125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.965185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.965338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.965365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.965473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.965502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 
00:41:04.794 [2024-07-11 02:46:54.965617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.965643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.965741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.965773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.965880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.965910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.966016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.966061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.966148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.966176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 
00:41:04.794 [2024-07-11 02:46:54.966281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.966323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.966412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.966439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.966540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.966587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.966699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.966742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.966855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.966898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 
00:41:04.794 [2024-07-11 02:46:54.967004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.967047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.967143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.967171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.967267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.967297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.967399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.967426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.967523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.967557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 
00:41:04.794 [2024-07-11 02:46:54.967664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.967695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.967807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.967835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.967924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.967951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.968058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.968086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.968183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.968213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 
00:41:04.794 [2024-07-11 02:46:54.968326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.968355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.968475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.794 [2024-07-11 02:46:54.968523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.794 qpair failed and we were unable to recover it. 00:41:04.794 [2024-07-11 02:46:54.968625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.968653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.968757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.968791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.968912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.968954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 
00:41:04.795 [2024-07-11 02:46:54.969060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.969103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.969194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.969221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.969312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.969338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.969433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.969463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.969582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.969626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 
00:41:04.795 [2024-07-11 02:46:54.969720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.969747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.969832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.969858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.969983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.970014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.970145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.970186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.970279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.970307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 
00:41:04.795 [2024-07-11 02:46:54.970408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.970439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.970565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.970597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.970699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.970727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.970829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.970855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.970958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.970995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 
00:41:04.795 [2024-07-11 02:46:54.971082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.971109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.971199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.971226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.971335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.971381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.971490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.971539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.971638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.971666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 
00:41:04.795 [2024-07-11 02:46:54.971785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.971813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.971927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.971971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.972086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.972130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.972243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.972285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.972399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.972440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 
00:41:04.795 [2024-07-11 02:46:54.972537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.972565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.972670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.972711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.972796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.972822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.972920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.972949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.973068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.973109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 
00:41:04.795 [2024-07-11 02:46:54.973224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.973253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.973358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.973385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.973484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.973539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.973648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.973675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.973789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.973816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 
00:41:04.795 [2024-07-11 02:46:54.973934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.973961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.974072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.974099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.974201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.974231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.974329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.974358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.974476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.974504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 
00:41:04.795 [2024-07-11 02:46:54.974633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.974660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.974781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.974821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.974932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.974960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.975078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.975120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 00:41:04.795 [2024-07-11 02:46:54.975242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.795 [2024-07-11 02:46:54.975284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.795 qpair failed and we were unable to recover it. 
00:41:04.795 [2024-07-11 02:46:54.975372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.795 [2024-07-11 02:46:54.975398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.795 qpair failed and we were unable to recover it.
00:41:04.795 [2024-07-11 02:46:54.975492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.795 [2024-07-11 02:46:54.975527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.795 qpair failed and we were unable to recover it.
00:41:04.795 [2024-07-11 02:46:54.975640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.795 [2024-07-11 02:46:54.975680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.795 qpair failed and we were unable to recover it.
00:41:04.795 [2024-07-11 02:46:54.975794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.795 [2024-07-11 02:46:54.975824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.795 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.975930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.975957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.976061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.976091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.976205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.976233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.976349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.976377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.976499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.976554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.976669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.976710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.976812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.976840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.976938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.976964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.977051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.977082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.977198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.977226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.977360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.977401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.977484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.977522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.977628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.977655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.977754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.977781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.977876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.977905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.977993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.978022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.978120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.978148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.978281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.978307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.978395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.978422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.978520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.978547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.978638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.978665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.978749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.978775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.978870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.978896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.978988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.979014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.979110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.979136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.979227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.979254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.979344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.979370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.979454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.979480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.979578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.979605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.979703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.979732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.979828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.979856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.979957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.979984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.980081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.980108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.980203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.980232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.980324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.980351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.980447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.980479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.980575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.980602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.980697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.980724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.980815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.980842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.980938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.980965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.981045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.981071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.981159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.981188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.981276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.981303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.981409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.981439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.981542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.981591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.981690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.981717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.981807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.981834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.981930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.981964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.982073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.982101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.982190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.982220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.982340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.982368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.982457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.982484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.982578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.982605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.982687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.982713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.982797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.982823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.982922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.982948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.983033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.983059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.983144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.983171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.983263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.983292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.983388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.983416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.983514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.796 [2024-07-11 02:46:54.983543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.796 qpair failed and we were unable to recover it.
00:41:04.796 [2024-07-11 02:46:54.983638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.983664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.983753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.983786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.983879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.983907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.984002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.984030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.984123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.984152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.984255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.984284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.984373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.984401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.984493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.984528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.984624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.984652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.984756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.984785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.984888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.984915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.985011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.985040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.985133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.985160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.985247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.985273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.985360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.985387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.985489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.985528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.985624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.985653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.985746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.985773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.985868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.985895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.985987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.986014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.986100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.986126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.986212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.986238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.986332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.986360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.986446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.986474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.986570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.986597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.986689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.986717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.986808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.986834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.986926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.986953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.987045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.987074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.987163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.987190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.987278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.987306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.987403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.987430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.987522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.987551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.987639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.987666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.987755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.987781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.987867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.987893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.987983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.988009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.988097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.988124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.988211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.988238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.988332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.988358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.988455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.988484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.988581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.988613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.988705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.988731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.988821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.988847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.988935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.988961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.989045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.989072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.989160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.989188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.989282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.989308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.989404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.989431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.989524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.797 [2024-07-11 02:46:54.989552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.797 qpair failed and we were unable to recover it.
00:41:04.797 [2024-07-11 02:46:54.989645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.797 [2024-07-11 02:46:54.989671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.797 qpair failed and we were unable to recover it. 00:41:04.797 [2024-07-11 02:46:54.989765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.797 [2024-07-11 02:46:54.989791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.797 qpair failed and we were unable to recover it. 00:41:04.797 [2024-07-11 02:46:54.989886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.797 [2024-07-11 02:46:54.989914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.797 qpair failed and we were unable to recover it. 00:41:04.797 [2024-07-11 02:46:54.990001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.797 [2024-07-11 02:46:54.990027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.797 qpair failed and we were unable to recover it. 00:41:04.797 [2024-07-11 02:46:54.990127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.797 [2024-07-11 02:46:54.990157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.797 qpair failed and we were unable to recover it. 
00:41:04.797 [2024-07-11 02:46:54.990265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.797 [2024-07-11 02:46:54.990294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.797 qpair failed and we were unable to recover it. 00:41:04.797 [2024-07-11 02:46:54.990386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.797 [2024-07-11 02:46:54.990420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.797 qpair failed and we were unable to recover it. 00:41:04.797 [2024-07-11 02:46:54.990539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.797 [2024-07-11 02:46:54.990568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.797 qpair failed and we were unable to recover it. 00:41:04.797 [2024-07-11 02:46:54.990657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.797 [2024-07-11 02:46:54.990685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.797 qpair failed and we were unable to recover it. 00:41:04.797 [2024-07-11 02:46:54.990779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.797 [2024-07-11 02:46:54.990806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 
00:41:04.798 [2024-07-11 02:46:54.990906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.990934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.991017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.991044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.991130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.991156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.991240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.991266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.991356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.991382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 
00:41:04.798 [2024-07-11 02:46:54.991472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.991498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.991599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.991626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.991717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.991743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.991839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.991870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.991955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.991982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 
00:41:04.798 [2024-07-11 02:46:54.992061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.992087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.992179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.992206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.992295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.992323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.992418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.992447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.992548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.992584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 
00:41:04.798 [2024-07-11 02:46:54.992679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.992706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.992793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.992842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.992945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.992975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.993068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.993095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.993184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.993210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 
00:41:04.798 [2024-07-11 02:46:54.993301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.993327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.993422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.993448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.993547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.993574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.993656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.993683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.993773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.993800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 
00:41:04.798 [2024-07-11 02:46:54.993894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.993923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.994017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.994044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.994130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.994159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.994252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.994280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.994371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.994397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 
00:41:04.798 [2024-07-11 02:46:54.994495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.994529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.994622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.994650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.994738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.994766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.994865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.994893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.994993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.995021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 
00:41:04.798 [2024-07-11 02:46:54.995117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.995147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.995236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.995262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.995352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.995379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.995465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.995492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.995590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.995616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 
00:41:04.798 [2024-07-11 02:46:54.995703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.995730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.995812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.995838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.995926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.995952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.996057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.996086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.996177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.996204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 
00:41:04.798 [2024-07-11 02:46:54.996297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.996326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.996418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.996445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.996547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.996575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.996671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.996699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.996839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.996866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 
00:41:04.798 [2024-07-11 02:46:54.996964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.996996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.997091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.997118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.997206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.997236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.997334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.997361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.997443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.997470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 
00:41:04.798 [2024-07-11 02:46:54.997575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.997602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.997687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.997713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.997800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.997827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.997916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.997943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.998035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.998064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 
00:41:04.798 [2024-07-11 02:46:54.998156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.998185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.998279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.798 [2024-07-11 02:46:54.998307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.798 qpair failed and we were unable to recover it. 00:41:04.798 [2024-07-11 02:46:54.998399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.799 [2024-07-11 02:46:54.998426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.799 qpair failed and we were unable to recover it. 00:41:04.799 [2024-07-11 02:46:54.998526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.799 [2024-07-11 02:46:54.998553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.799 qpair failed and we were unable to recover it. 00:41:04.799 [2024-07-11 02:46:54.998649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.799 [2024-07-11 02:46:54.998679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.799 qpair failed and we were unable to recover it. 
00:41:04.799 [2024-07-11 02:46:54.998774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:54.998801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:54.998894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:54.998921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:54.999011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:54.999039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:54.999126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:54.999153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:54.999242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:54.999269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:54.999356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:54.999384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:54.999477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:54.999503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:54.999649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:54.999675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:54.999760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:54.999786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:54.999885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:54.999911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:54.999994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.000021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.000113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.000140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.000235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.000265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.000357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.000393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.000489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.000527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.000629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.000656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.000748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.000774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.000865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.000892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.000983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.001011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.001108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.001136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.001229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.001259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.001362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.001391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.001484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.001524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.001642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.001677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.001782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.001810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.001899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.001925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.002018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.002045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.002139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.002166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.002255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.002281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.002372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.002400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.002492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.002528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.002623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.002650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.002739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.002765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.002846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.002872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.002961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.002987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.003074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.003101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.003192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.003222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.003316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.003348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.003438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.003464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.003558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.003586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.003677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.003703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.003794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.003821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.003914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.003941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.004030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.004059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.004153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.004182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.004272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.004300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.004387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.004414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.004499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.004531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.004624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.004651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.004737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.004763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.004846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.004873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.004969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.004998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.005095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.005124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.005217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.005244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.005326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.005353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.005475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.005502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.005608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.005635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.005723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.005750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.005845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.799 [2024-07-11 02:46:55.005872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.799 qpair failed and we were unable to recover it.
00:41:04.799 [2024-07-11 02:46:55.005962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.005991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.006079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.006106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.006201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.006230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.006313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.006340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.006463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.006491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.006597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.006625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.006715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.006744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.006863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.006890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.006976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.007003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.007092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.007119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.007239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.007265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.007356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.007383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.007465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.007491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.007589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.007616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.007713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.007740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.007832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.007859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.007983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.008011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.008101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.008127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.008221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.008253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.008345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.008374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.008468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.008495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.008599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.008627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.008747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.008774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.008872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.008900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.008989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.009016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.009108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.009134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.009225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.009251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.009340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.009368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.009505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.009548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.009682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.009709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.009795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.009821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.009911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.009937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.010026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.010053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.010144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.010173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.010265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.010293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.010379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.010406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.010489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.010527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.010616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.010643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.010759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.010785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.010905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.010932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.011023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.011052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.011147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.011176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.011298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.011324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.011417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.011445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.011529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.011557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.800 [2024-07-11 02:46:55.011651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.800 [2024-07-11 02:46:55.011686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.800 qpair failed and we were unable to recover it.
00:41:04.801 [2024-07-11 02:46:55.011808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.801 [2024-07-11 02:46:55.011835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.801 qpair failed and we were unable to recover it.
00:41:04.801 [2024-07-11 02:46:55.011929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.801 [2024-07-11 02:46:55.011956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.801 qpair failed and we were unable to recover it.
00:41:04.801 [2024-07-11 02:46:55.012045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.801 [2024-07-11 02:46:55.012072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.801 qpair failed and we were unable to recover it.
00:41:04.801 [2024-07-11 02:46:55.012157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.801 [2024-07-11 02:46:55.012184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.801 qpair failed and we were unable to recover it.
00:41:04.801 [2024-07-11 02:46:55.012276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.801 [2024-07-11 02:46:55.012304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.801 qpair failed and we were unable to recover it.
00:41:04.801 [2024-07-11 02:46:55.012395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.801 [2024-07-11 02:46:55.012422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.801 qpair failed and we were unable to recover it.
00:41:04.801 [2024-07-11 02:46:55.012523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.801 [2024-07-11 02:46:55.012551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.801 qpair failed and we were unable to recover it.
00:41:04.801 [2024-07-11 02:46:55.012674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.801 [2024-07-11 02:46:55.012701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.801 qpair failed and we were unable to recover it.
00:41:04.801 [2024-07-11 02:46:55.012793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.801 [2024-07-11 02:46:55.012821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.801 qpair failed and we were unable to recover it.
00:41:04.801 [2024-07-11 02:46:55.012915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.801 [2024-07-11 02:46:55.012941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.801 qpair failed and we were unable to recover it.
00:41:04.801 [2024-07-11 02:46:55.013028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.013054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.013142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.013168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.013255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.013281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.013376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.013404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.013530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.013558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 
00:41:04.801 [2024-07-11 02:46:55.013647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.013675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.013763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.013793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.013887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.013916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.014017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.014046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.014139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.014165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 
00:41:04.801 [2024-07-11 02:46:55.014245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.014271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.014359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.014385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.014476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.014504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.014602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.014628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.014739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.014765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 
00:41:04.801 [2024-07-11 02:46:55.014886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.014914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.015016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.015045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.015139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.015168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.015264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.015292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.015384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.015411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 
00:41:04.801 [2024-07-11 02:46:55.015497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.015530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.015616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.015643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.015730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.015758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.015850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.015879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.015991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.016018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 
00:41:04.801 [2024-07-11 02:46:55.016118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.016147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.016254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.016281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.016369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.016397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.016488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.016519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.016612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.016644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 
00:41:04.801 [2024-07-11 02:46:55.016733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.016761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.016851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.016877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.016967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.016994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.017080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.017107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.017199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.017226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 
00:41:04.801 [2024-07-11 02:46:55.017322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.017350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.017437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.017463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.017557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.017584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.017680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.017706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.017798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.017824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 
00:41:04.801 [2024-07-11 02:46:55.017911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.017937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.018024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.018050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.018151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.018180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.018282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.018310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.018418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.018446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 
00:41:04.801 [2024-07-11 02:46:55.018535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.018563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.018676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.018705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.018813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.018842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.018934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.018962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.019065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.019092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 
00:41:04.801 [2024-07-11 02:46:55.019178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.019206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.019291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.019317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.801 [2024-07-11 02:46:55.019428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.801 [2024-07-11 02:46:55.019455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.801 qpair failed and we were unable to recover it. 00:41:04.802 [2024-07-11 02:46:55.019555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.802 [2024-07-11 02:46:55.019582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.802 qpair failed and we were unable to recover it. 00:41:04.802 [2024-07-11 02:46:55.019673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.802 [2024-07-11 02:46:55.019701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.802 qpair failed and we were unable to recover it. 
00:41:04.802 [2024-07-11 02:46:55.019793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.802 [2024-07-11 02:46:55.019821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.802 qpair failed and we were unable to recover it. 00:41:04.802 [2024-07-11 02:46:55.019911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.802 [2024-07-11 02:46:55.019940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.802 qpair failed and we were unable to recover it. 00:41:04.802 [2024-07-11 02:46:55.020030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.802 [2024-07-11 02:46:55.020057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.802 qpair failed and we were unable to recover it. 00:41:04.802 [2024-07-11 02:46:55.020142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.802 [2024-07-11 02:46:55.020169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.802 qpair failed and we were unable to recover it. 00:41:04.802 [2024-07-11 02:46:55.020263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.802 [2024-07-11 02:46:55.020291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.802 qpair failed and we were unable to recover it. 
00:41:04.802 [2024-07-11 02:46:55.020386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.802 [2024-07-11 02:46:55.020414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.802 qpair failed and we were unable to recover it. 00:41:04.802 [2024-07-11 02:46:55.020504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.802 [2024-07-11 02:46:55.020538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.802 qpair failed and we were unable to recover it. 00:41:04.802 [2024-07-11 02:46:55.020628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.802 [2024-07-11 02:46:55.020655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.802 qpair failed and we were unable to recover it. 00:41:04.802 [2024-07-11 02:46:55.020736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.802 [2024-07-11 02:46:55.020763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.802 qpair failed and we were unable to recover it. 00:41:04.802 [2024-07-11 02:46:55.020852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.802 [2024-07-11 02:46:55.020878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.802 qpair failed and we were unable to recover it. 
00:41:04.802 [2024-07-11 02:46:55.020968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.802 [2024-07-11 02:46:55.020995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.802 qpair failed and we were unable to recover it. 00:41:04.802 [2024-07-11 02:46:55.021091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.802 [2024-07-11 02:46:55.021119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.802 qpair failed and we were unable to recover it. 00:41:04.802 [2024-07-11 02:46:55.021202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.802 [2024-07-11 02:46:55.021230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.802 qpair failed and we were unable to recover it. 00:41:04.802 [2024-07-11 02:46:55.021329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.802 [2024-07-11 02:46:55.021357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.802 qpair failed and we were unable to recover it. 00:41:04.802 [2024-07-11 02:46:55.021454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.802 [2024-07-11 02:46:55.021482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.802 qpair failed and we were unable to recover it. 
00:41:04.802 [2024-07-11 02:46:55.021591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.802 [2024-07-11 02:46:55.021620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.802 qpair failed and we were unable to recover it. 00:41:04.802 [2024-07-11 02:46:55.021704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.802 [2024-07-11 02:46:55.021731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.802 qpair failed and we were unable to recover it. 00:41:04.802 [2024-07-11 02:46:55.021819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.802 [2024-07-11 02:46:55.021847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.802 qpair failed and we were unable to recover it. 00:41:04.802 [2024-07-11 02:46:55.021932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.802 [2024-07-11 02:46:55.021959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.802 qpair failed and we were unable to recover it. 00:41:04.802 [2024-07-11 02:46:55.022057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.802 [2024-07-11 02:46:55.022085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.802 qpair failed and we were unable to recover it. 
00:41:04.802 [2024-07-11 02:46:55.022173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:41:04.802 [2024-07-11 02:46:55.022200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 
00:41:04.802 qpair failed and we were unable to recover it. 
00:41:04.802 [the same three-line sequence — connect() failed, errno = 111; sock connection error of tqpair with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every subsequent connection attempt from 02:46:55.022286 through 02:46:55.036174, cycling across tqpair values 0x7f332c000b90, 0x7f3334000b90, 0x7f333c000b90, and 0x2266180] 
00:41:04.804 [2024-07-11 02:46:55.036264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.804 [2024-07-11 02:46:55.036296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.804 qpair failed and we were unable to recover it. 00:41:04.804 [2024-07-11 02:46:55.036391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.804 [2024-07-11 02:46:55.036417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.804 qpair failed and we were unable to recover it. 00:41:04.804 [2024-07-11 02:46:55.036506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.804 [2024-07-11 02:46:55.036537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.804 qpair failed and we were unable to recover it. 00:41:04.804 [2024-07-11 02:46:55.036621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.804 [2024-07-11 02:46:55.036647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.804 qpair failed and we were unable to recover it. 00:41:04.804 [2024-07-11 02:46:55.036735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.804 [2024-07-11 02:46:55.036765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.804 qpair failed and we were unable to recover it. 
00:41:04.804 [2024-07-11 02:46:55.036849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.804 [2024-07-11 02:46:55.036875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.804 qpair failed and we were unable to recover it. 00:41:04.804 [2024-07-11 02:46:55.036964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.804 [2024-07-11 02:46:55.036993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.804 qpair failed and we were unable to recover it. 00:41:04.804 [2024-07-11 02:46:55.037091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.804 [2024-07-11 02:46:55.037117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.804 qpair failed and we were unable to recover it. 00:41:04.804 [2024-07-11 02:46:55.037204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.804 [2024-07-11 02:46:55.037231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.804 qpair failed and we were unable to recover it. 00:41:04.804 [2024-07-11 02:46:55.037320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.804 [2024-07-11 02:46:55.037347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.804 qpair failed and we were unable to recover it. 
00:41:04.804 [2024-07-11 02:46:55.037436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.804 [2024-07-11 02:46:55.037464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.804 qpair failed and we were unable to recover it. 00:41:04.804 [2024-07-11 02:46:55.037563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.804 [2024-07-11 02:46:55.037592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.804 qpair failed and we were unable to recover it. 00:41:04.804 [2024-07-11 02:46:55.037689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.804 [2024-07-11 02:46:55.037717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.804 qpair failed and we were unable to recover it. 00:41:04.804 [2024-07-11 02:46:55.037808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.804 [2024-07-11 02:46:55.037836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.804 qpair failed and we were unable to recover it. 00:41:04.804 [2024-07-11 02:46:55.037934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.804 [2024-07-11 02:46:55.037961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.804 qpair failed and we were unable to recover it. 
00:41:04.804 [2024-07-11 02:46:55.038060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.804 [2024-07-11 02:46:55.038088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.804 qpair failed and we were unable to recover it. 00:41:04.804 [2024-07-11 02:46:55.038179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.804 [2024-07-11 02:46:55.038207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.804 qpair failed and we were unable to recover it. 00:41:04.804 [2024-07-11 02:46:55.038297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.038326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.038413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.038440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.038534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.038562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 
00:41:04.805 [2024-07-11 02:46:55.038648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.038674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.038774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.038802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.038887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.038913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.039007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.039036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.039126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.039153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 
00:41:04.805 [2024-07-11 02:46:55.039242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.039270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.039362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.039390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.039480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.039508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.039623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.039652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.039743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.039771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 
00:41:04.805 [2024-07-11 02:46:55.039855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.039881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.039972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.039999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.040081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.040108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.040193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.040220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.040303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.040329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 
00:41:04.805 [2024-07-11 02:46:55.040421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.040449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.040533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.040561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.040655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.040683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.040771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.040800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.040887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.040915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 
00:41:04.805 [2024-07-11 02:46:55.041007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.041039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.041141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.041170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.041252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.041279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.041366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.041395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.041478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.041506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 
00:41:04.805 [2024-07-11 02:46:55.041604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.041631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.041720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.041747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.041828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.041855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.041950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.041976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.042061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.042088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 
00:41:04.805 [2024-07-11 02:46:55.042179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.042207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.805 qpair failed and we were unable to recover it. 00:41:04.805 [2024-07-11 02:46:55.042300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.805 [2024-07-11 02:46:55.042329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 00:41:04.806 [2024-07-11 02:46:55.042420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.042447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 00:41:04.806 [2024-07-11 02:46:55.042538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.042566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 00:41:04.806 [2024-07-11 02:46:55.042661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.042688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 
00:41:04.806 [2024-07-11 02:46:55.042773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.042800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 00:41:04.806 [2024-07-11 02:46:55.042890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.042918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 00:41:04.806 [2024-07-11 02:46:55.043006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.043033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 00:41:04.806 [2024-07-11 02:46:55.043120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.043148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 00:41:04.806 [2024-07-11 02:46:55.043240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.043269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 
00:41:04.806 [2024-07-11 02:46:55.043365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.043393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 00:41:04.806 [2024-07-11 02:46:55.043486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.043522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 00:41:04.806 [2024-07-11 02:46:55.043608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.043635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 00:41:04.806 [2024-07-11 02:46:55.043722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.043749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 00:41:04.806 [2024-07-11 02:46:55.043831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.043857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 
00:41:04.806 [2024-07-11 02:46:55.043941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.043967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 00:41:04.806 [2024-07-11 02:46:55.044054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.044081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 00:41:04.806 [2024-07-11 02:46:55.044168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.044199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 00:41:04.806 [2024-07-11 02:46:55.044296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.044324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 00:41:04.806 [2024-07-11 02:46:55.044411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.044439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 
00:41:04.806 [2024-07-11 02:46:55.044535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.044565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 00:41:04.806 [2024-07-11 02:46:55.044659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.044685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 00:41:04.806 [2024-07-11 02:46:55.044771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.044798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 00:41:04.806 [2024-07-11 02:46:55.044888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.044915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 00:41:04.806 [2024-07-11 02:46:55.044999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.045026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 
00:41:04.806 [2024-07-11 02:46:55.045118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.045145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 00:41:04.806 [2024-07-11 02:46:55.045236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.045263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 00:41:04.806 [2024-07-11 02:46:55.045350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.045376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 00:41:04.806 [2024-07-11 02:46:55.045466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.045494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 00:41:04.806 [2024-07-11 02:46:55.045592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.806 [2024-07-11 02:46:55.045619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.806 qpair failed and we were unable to recover it. 
00:41:04.810 [2024-07-11 02:46:55.058764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.810 [2024-07-11 02:46:55.058791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.810 qpair failed and we were unable to recover it. 00:41:04.810 [2024-07-11 02:46:55.058877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.810 [2024-07-11 02:46:55.058903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.810 qpair failed and we were unable to recover it. 00:41:04.810 [2024-07-11 02:46:55.058994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.810 [2024-07-11 02:46:55.059020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.810 qpair failed and we were unable to recover it. 00:41:04.810 [2024-07-11 02:46:55.059106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.810 [2024-07-11 02:46:55.059132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.810 qpair failed and we were unable to recover it. 00:41:04.810 [2024-07-11 02:46:55.059220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.810 [2024-07-11 02:46:55.059246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.810 qpair failed and we were unable to recover it. 
00:41:04.810 [2024-07-11 02:46:55.059332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.810 [2024-07-11 02:46:55.059358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.810 qpair failed and we were unable to recover it. 00:41:04.810 [2024-07-11 02:46:55.059447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.810 [2024-07-11 02:46:55.059473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.810 qpair failed and we were unable to recover it. 00:41:04.810 [2024-07-11 02:46:55.059563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.810 [2024-07-11 02:46:55.059590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.810 qpair failed and we were unable to recover it. 00:41:04.810 [2024-07-11 02:46:55.059673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.810 [2024-07-11 02:46:55.059699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.810 qpair failed and we were unable to recover it. 00:41:04.810 [2024-07-11 02:46:55.059778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.810 [2024-07-11 02:46:55.059804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.810 qpair failed and we were unable to recover it. 
00:41:04.810 [2024-07-11 02:46:55.059883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.810 [2024-07-11 02:46:55.059909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.810 qpair failed and we were unable to recover it. 00:41:04.810 [2024-07-11 02:46:55.060002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.810 [2024-07-11 02:46:55.060033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.810 qpair failed and we were unable to recover it. 00:41:04.810 [2024-07-11 02:46:55.060124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.810 [2024-07-11 02:46:55.060153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.810 qpair failed and we were unable to recover it. 00:41:04.810 [2024-07-11 02:46:55.060251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.810 [2024-07-11 02:46:55.060280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.810 qpair failed and we were unable to recover it. 00:41:04.810 [2024-07-11 02:46:55.060370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.810 [2024-07-11 02:46:55.060397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.810 qpair failed and we were unable to recover it. 
00:41:04.810 [2024-07-11 02:46:55.060480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.810 [2024-07-11 02:46:55.060506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.810 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.060602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.060629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.060722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.060750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.060833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.060860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.060950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.060976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 
00:41:04.811 [2024-07-11 02:46:55.061068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.061098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.061193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.061221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.061317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.061345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.061439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.061466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.061561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.061589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 
00:41:04.811 [2024-07-11 02:46:55.061682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.061709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.061795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.061822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.061909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.061936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.062026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.062054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.062150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.062177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 
00:41:04.811 [2024-07-11 02:46:55.062262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.062289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.062424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.062453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.062552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.062579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.062665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.062691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.062779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.062805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 
00:41:04.811 [2024-07-11 02:46:55.062904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.062931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.063020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.063047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.063129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.063156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.063247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.063276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.063361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.063387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 
00:41:04.811 [2024-07-11 02:46:55.063481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.063515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.063602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.063629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.063721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.063747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.063885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.063914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.064008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.064035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 
00:41:04.811 [2024-07-11 02:46:55.064123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.064149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.064238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.064264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.064355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.064382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.064472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.064499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.064597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.064625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 
00:41:04.811 [2024-07-11 02:46:55.064711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.064738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.064838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.064869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.064960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.064987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.065078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.811 [2024-07-11 02:46:55.065105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.811 qpair failed and we were unable to recover it. 00:41:04.811 [2024-07-11 02:46:55.065189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.065216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 
00:41:04.812 [2024-07-11 02:46:55.065303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.065330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 00:41:04.812 [2024-07-11 02:46:55.065411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.065437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 00:41:04.812 [2024-07-11 02:46:55.065534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.065561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 00:41:04.812 [2024-07-11 02:46:55.065651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.065678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 00:41:04.812 [2024-07-11 02:46:55.065765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.065792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 
00:41:04.812 [2024-07-11 02:46:55.065884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.065915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 00:41:04.812 [2024-07-11 02:46:55.066005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.066034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 00:41:04.812 [2024-07-11 02:46:55.066124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.066150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 00:41:04.812 [2024-07-11 02:46:55.066236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.066262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 00:41:04.812 [2024-07-11 02:46:55.066348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.066374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 
00:41:04.812 [2024-07-11 02:46:55.066469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.066495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 00:41:04.812 [2024-07-11 02:46:55.066588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.066614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 00:41:04.812 [2024-07-11 02:46:55.066696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.066723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 00:41:04.812 [2024-07-11 02:46:55.066817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.066846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 00:41:04.812 [2024-07-11 02:46:55.066937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.066965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 
00:41:04.812 [2024-07-11 02:46:55.067058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.067085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 00:41:04.812 [2024-07-11 02:46:55.067174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.067201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 00:41:04.812 [2024-07-11 02:46:55.067288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.067314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 00:41:04.812 [2024-07-11 02:46:55.067407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.067433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 00:41:04.812 [2024-07-11 02:46:55.067531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.067559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 
00:41:04.812 [2024-07-11 02:46:55.067642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.067671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 00:41:04.812 [2024-07-11 02:46:55.067764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.067792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 00:41:04.812 [2024-07-11 02:46:55.067882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.067910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 00:41:04.812 [2024-07-11 02:46:55.067999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.068030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 00:41:04.812 [2024-07-11 02:46:55.068119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.068146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 
00:41:04.812 [2024-07-11 02:46:55.068229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.068255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 00:41:04.812 [2024-07-11 02:46:55.068339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.068366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 00:41:04.812 [2024-07-11 02:46:55.068461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.068496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 00:41:04.812 [2024-07-11 02:46:55.068605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.068633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 00:41:04.812 [2024-07-11 02:46:55.068717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.068744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 
00:41:04.812 [2024-07-11 02:46:55.068835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.812 [2024-07-11 02:46:55.068863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.812 qpair failed and we were unable to recover it. 00:41:04.812 [2024-07-11 02:46:55.068950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.068977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.069064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.069091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.069185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.069212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.069302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.069329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 
00:41:04.813 [2024-07-11 02:46:55.069426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.069453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.069545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.069573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.069671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.069697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.069791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.069818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.069914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.069943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 
00:41:04.813 [2024-07-11 02:46:55.070035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.070063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.070153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.070181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.070269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.070296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.070379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.070406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.070493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.070528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 
00:41:04.813 [2024-07-11 02:46:55.070613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.070639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.070735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.070761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.070851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.070879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.070974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.071001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.071092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.071122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 
00:41:04.813 [2024-07-11 02:46:55.071211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.071239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.071332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.071358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.071452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.071479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.071576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.071603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.071691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.071717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 
00:41:04.813 [2024-07-11 02:46:55.071802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.071830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.071929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.071955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.072050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.072079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.072169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.072195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.072277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.072303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 
00:41:04.813 [2024-07-11 02:46:55.072393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.072419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.072517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.072544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.072629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.072655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.072753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.072780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.072885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.072912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 
00:41:04.813 [2024-07-11 02:46:55.073004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.073031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.813 [2024-07-11 02:46:55.073115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.813 [2024-07-11 02:46:55.073143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.813 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.073236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.073267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.073367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.073397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.073498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.073535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 
00:41:04.814 [2024-07-11 02:46:55.073624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.073651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.073738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.073765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.073855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.073881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.073976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.074004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.074098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.074126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 
00:41:04.814 [2024-07-11 02:46:55.074218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.074246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.074344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.074373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.074473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.074502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.074601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.074628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.074722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.074750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 
00:41:04.814 [2024-07-11 02:46:55.074842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.074869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.074955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.074982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.075066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.075098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.075185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.075214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.075302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.075328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 
00:41:04.814 [2024-07-11 02:46:55.075423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.075451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.075551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.075578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.075670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.075700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.075797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.075824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.075915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.075942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 
00:41:04.814 [2024-07-11 02:46:55.076025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.076056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.076153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.076181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.076270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.076297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.076378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.076405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.076487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.076519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 
00:41:04.814 [2024-07-11 02:46:55.076602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.076628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.076720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.076746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.076843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.076869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.077070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.077098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.077178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.077204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 
00:41:04.814 [2024-07-11 02:46:55.077292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.814 [2024-07-11 02:46:55.077318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.814 qpair failed and we were unable to recover it. 00:41:04.814 [2024-07-11 02:46:55.077408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.815 [2024-07-11 02:46:55.077435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.815 qpair failed and we were unable to recover it. 00:41:04.815 [2024-07-11 02:46:55.077525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.815 [2024-07-11 02:46:55.077552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.815 qpair failed and we were unable to recover it. 00:41:04.815 [2024-07-11 02:46:55.077636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.815 [2024-07-11 02:46:55.077662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.815 qpair failed and we were unable to recover it. 00:41:04.815 [2024-07-11 02:46:55.077756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.815 [2024-07-11 02:46:55.077782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.815 qpair failed and we were unable to recover it. 
00:41:04.815 [2024-07-11 02:46:55.077866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.815 [2024-07-11 02:46:55.077892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.815 qpair failed and we were unable to recover it. 00:41:04.815 [2024-07-11 02:46:55.077973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.815 [2024-07-11 02:46:55.078000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.815 qpair failed and we were unable to recover it. 00:41:04.815 [2024-07-11 02:46:55.078088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.815 [2024-07-11 02:46:55.078116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.815 qpair failed and we were unable to recover it. 00:41:04.815 [2024-07-11 02:46:55.078212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.815 [2024-07-11 02:46:55.078239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.815 qpair failed and we were unable to recover it. 00:41:04.815 [2024-07-11 02:46:55.078324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.815 [2024-07-11 02:46:55.078351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.815 qpair failed and we were unable to recover it. 
00:41:04.815 [2024-07-11 02:46:55.078436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.815 [2024-07-11 02:46:55.078462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.815 qpair failed and we were unable to recover it. 00:41:04.815 [2024-07-11 02:46:55.078548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.815 [2024-07-11 02:46:55.078575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.815 qpair failed and we were unable to recover it. 00:41:04.815 [2024-07-11 02:46:55.078672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.815 [2024-07-11 02:46:55.078702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.815 qpair failed and we were unable to recover it. 00:41:04.815 [2024-07-11 02:46:55.078785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.815 [2024-07-11 02:46:55.078811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.815 qpair failed and we were unable to recover it. 00:41:04.815 [2024-07-11 02:46:55.078892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.815 [2024-07-11 02:46:55.078919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.815 qpair failed and we were unable to recover it. 
00:41:04.815 [2024-07-11 02:46:55.079011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.815 [2024-07-11 02:46:55.079039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.815 qpair failed and we were unable to recover it. 00:41:04.815 [2024-07-11 02:46:55.079132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.815 [2024-07-11 02:46:55.079160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.815 qpair failed and we were unable to recover it. 00:41:04.815 [2024-07-11 02:46:55.079251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.815 [2024-07-11 02:46:55.079280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.815 qpair failed and we were unable to recover it. 00:41:04.815 [2024-07-11 02:46:55.079364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.815 [2024-07-11 02:46:55.079391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.815 qpair failed and we were unable to recover it. 00:41:04.815 [2024-07-11 02:46:55.079483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.815 [2024-07-11 02:46:55.079515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.815 qpair failed and we were unable to recover it. 
00:41:04.815 [2024-07-11 02:46:55.079605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.815 [2024-07-11 02:46:55.079631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.815 qpair failed and we were unable to recover it. 00:41:04.815 [2024-07-11 02:46:55.079722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.815 [2024-07-11 02:46:55.079748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.815 qpair failed and we were unable to recover it. 00:41:04.815 [2024-07-11 02:46:55.079841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.815 [2024-07-11 02:46:55.079868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.815 qpair failed and we were unable to recover it. 00:41:04.815 [2024-07-11 02:46:55.079957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.815 [2024-07-11 02:46:55.079983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.815 qpair failed and we were unable to recover it. 00:41:04.815 [2024-07-11 02:46:55.080077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.815 [2024-07-11 02:46:55.080105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.815 qpair failed and we were unable to recover it. 
00:41:04.815 [2024-07-11 02:46:55.080190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.815 [2024-07-11 02:46:55.080217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.815 qpair failed and we were unable to recover it.
00:41:04.815 [2024-07-11 02:46:55.080419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.815 [2024-07-11 02:46:55.080447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.815 qpair failed and we were unable to recover it.
00:41:04.815 [2024-07-11 02:46:55.080543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.815 [2024-07-11 02:46:55.080570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.815 qpair failed and we were unable to recover it.
00:41:04.815 [2024-07-11 02:46:55.080661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.815 [2024-07-11 02:46:55.080689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.815 qpair failed and we were unable to recover it.
00:41:04.815 [2024-07-11 02:46:55.080772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.815 [2024-07-11 02:46:55.080799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.815 qpair failed and we were unable to recover it.
00:41:04.815 [2024-07-11 02:46:55.080900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.815 [2024-07-11 02:46:55.080927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.815 qpair failed and we were unable to recover it.
00:41:04.815 [2024-07-11 02:46:55.081021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.815 [2024-07-11 02:46:55.081047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.815 qpair failed and we were unable to recover it.
00:41:04.815 [2024-07-11 02:46:55.081138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.815 [2024-07-11 02:46:55.081166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.815 qpair failed and we were unable to recover it.
00:41:04.815 [2024-07-11 02:46:55.081261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.815 [2024-07-11 02:46:55.081291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.815 qpair failed and we were unable to recover it.
00:41:04.815 [2024-07-11 02:46:55.081385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.815 [2024-07-11 02:46:55.081413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.815 qpair failed and we were unable to recover it.
00:41:04.815 [2024-07-11 02:46:55.081523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.815 [2024-07-11 02:46:55.081551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.815 qpair failed and we were unable to recover it.
00:41:04.815 [2024-07-11 02:46:55.081641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.815 [2024-07-11 02:46:55.081669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.815 qpair failed and we were unable to recover it.
00:41:04.815 [2024-07-11 02:46:55.081759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.815 [2024-07-11 02:46:55.081786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.815 qpair failed and we were unable to recover it.
00:41:04.815 [2024-07-11 02:46:55.081872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.815 [2024-07-11 02:46:55.081900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.815 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.081996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.082024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.082114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.082142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.082226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.082253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.082340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.082366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.082457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.082483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.082585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.082612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.082701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.082730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.082823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.082849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.082944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.082970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.083059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.083086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.083180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.083209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.083301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.083328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.083419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.083448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.083544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.083572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.083663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.083692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.083779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.083806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.083900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.083928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.084014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.084042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.084135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.084168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.084368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.084394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.084481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.084507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.084600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.084627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.084718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.084746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.084835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.084861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.084954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.084982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.085061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.085087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.085176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.085205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.085302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.085331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.085420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.085447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.085544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.085571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.085660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.085687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.085776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.085802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.085896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.085923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.086012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.086039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.086129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.816 [2024-07-11 02:46:55.086155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.816 qpair failed and we were unable to recover it.
00:41:04.816 [2024-07-11 02:46:55.086251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.086277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.086367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.086396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.086494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.086526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.086616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.086643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.086739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.086766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.086858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.086887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.086975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.087003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.087098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.087127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.087215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.087242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.087334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.087362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.087450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.087478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.087580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.087610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.087699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.087727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.087816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.087843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.087932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.087958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.088040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.088068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.088163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.088189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.088275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.088301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.088389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.088416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.088500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.088532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.088614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.088640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.088724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.088751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.088843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.088871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.088967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.089000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.089097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.089124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.089212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.089238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.089324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.089351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.089432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.089459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.089550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.089578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.089668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.089694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.089781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.089807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.089891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.089917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.090000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.090028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.817 [2024-07-11 02:46:55.090128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.817 [2024-07-11 02:46:55.090157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.817 qpair failed and we were unable to recover it.
00:41:04.818 [2024-07-11 02:46:55.090248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.818 [2024-07-11 02:46:55.090275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.818 qpair failed and we were unable to recover it.
00:41:04.818 [2024-07-11 02:46:55.090355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.818 [2024-07-11 02:46:55.090382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.818 qpair failed and we were unable to recover it.
00:41:04.818 [2024-07-11 02:46:55.090482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.818 [2024-07-11 02:46:55.090517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.818 qpair failed and we were unable to recover it.
00:41:04.818 [2024-07-11 02:46:55.090621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.818 [2024-07-11 02:46:55.090649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.818 qpair failed and we were unable to recover it.
00:41:04.818 [2024-07-11 02:46:55.090732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.818 [2024-07-11 02:46:55.090760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.818 qpair failed and we were unable to recover it.
00:41:04.818 [2024-07-11 02:46:55.090841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.818 [2024-07-11 02:46:55.090868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.818 qpair failed and we were unable to recover it.
00:41:04.818 [2024-07-11 02:46:55.090951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.818 [2024-07-11 02:46:55.090977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.818 qpair failed and we were unable to recover it.
00:41:04.818 [2024-07-11 02:46:55.091063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.818 [2024-07-11 02:46:55.091089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.818 qpair failed and we were unable to recover it.
00:41:04.818 [2024-07-11 02:46:55.091176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.818 [2024-07-11 02:46:55.091204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.818 qpair failed and we were unable to recover it.
00:41:04.818 [2024-07-11 02:46:55.091296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.818 [2024-07-11 02:46:55.091322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.818 qpair failed and we were unable to recover it.
00:41:04.818 [2024-07-11 02:46:55.091408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.818 [2024-07-11 02:46:55.091434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.818 qpair failed and we were unable to recover it.
00:41:04.818 [2024-07-11 02:46:55.091530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.818 [2024-07-11 02:46:55.091559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.818 qpair failed and we were unable to recover it.
00:41:04.818 [2024-07-11 02:46:55.091653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.818 [2024-07-11 02:46:55.091680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.818 qpair failed and we were unable to recover it. 00:41:04.818 [2024-07-11 02:46:55.091769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.818 [2024-07-11 02:46:55.091797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.818 qpair failed and we were unable to recover it. 00:41:04.818 [2024-07-11 02:46:55.091885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.818 [2024-07-11 02:46:55.091912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.818 qpair failed and we were unable to recover it. 00:41:04.818 [2024-07-11 02:46:55.092001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.818 [2024-07-11 02:46:55.092029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.818 qpair failed and we were unable to recover it. 00:41:04.818 [2024-07-11 02:46:55.092120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.818 [2024-07-11 02:46:55.092153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.818 qpair failed and we were unable to recover it. 
00:41:04.818 [2024-07-11 02:46:55.092243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.818 [2024-07-11 02:46:55.092270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.818 qpair failed and we were unable to recover it. 00:41:04.818 [2024-07-11 02:46:55.092363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.818 [2024-07-11 02:46:55.092392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.818 qpair failed and we were unable to recover it. 00:41:04.818 [2024-07-11 02:46:55.092486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.818 [2024-07-11 02:46:55.092519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.818 qpair failed and we were unable to recover it. 00:41:04.818 [2024-07-11 02:46:55.092622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.818 [2024-07-11 02:46:55.092650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.818 qpair failed and we were unable to recover it. 00:41:04.818 [2024-07-11 02:46:55.092741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.818 [2024-07-11 02:46:55.092769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.818 qpair failed and we were unable to recover it. 
00:41:04.818 [2024-07-11 02:46:55.092862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.818 [2024-07-11 02:46:55.092889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.818 qpair failed and we were unable to recover it. 00:41:04.818 [2024-07-11 02:46:55.092978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.818 [2024-07-11 02:46:55.093005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.818 qpair failed and we were unable to recover it. 00:41:04.818 [2024-07-11 02:46:55.093095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.818 [2024-07-11 02:46:55.093122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.818 qpair failed and we were unable to recover it. 00:41:04.818 [2024-07-11 02:46:55.093203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.818 [2024-07-11 02:46:55.093230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.818 qpair failed and we were unable to recover it. 00:41:04.818 [2024-07-11 02:46:55.093323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.818 [2024-07-11 02:46:55.093350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.818 qpair failed and we were unable to recover it. 
00:41:04.818 [2024-07-11 02:46:55.093439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.818 [2024-07-11 02:46:55.093467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.818 qpair failed and we were unable to recover it. 00:41:04.818 [2024-07-11 02:46:55.093571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.818 [2024-07-11 02:46:55.093600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.818 qpair failed and we were unable to recover it. 00:41:04.818 [2024-07-11 02:46:55.093695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.818 [2024-07-11 02:46:55.093721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.818 qpair failed and we were unable to recover it. 00:41:04.818 [2024-07-11 02:46:55.093811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.818 [2024-07-11 02:46:55.093837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.818 qpair failed and we were unable to recover it. 00:41:04.818 [2024-07-11 02:46:55.093917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.818 [2024-07-11 02:46:55.093945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.818 qpair failed and we were unable to recover it. 
00:41:04.818 [2024-07-11 02:46:55.094038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.818 [2024-07-11 02:46:55.094065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.818 qpair failed and we were unable to recover it. 00:41:04.818 [2024-07-11 02:46:55.094154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.818 [2024-07-11 02:46:55.094180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.818 qpair failed and we were unable to recover it. 00:41:04.818 [2024-07-11 02:46:55.094275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.818 [2024-07-11 02:46:55.094303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.818 qpair failed and we were unable to recover it. 00:41:04.818 [2024-07-11 02:46:55.094388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.818 [2024-07-11 02:46:55.094417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.818 qpair failed and we were unable to recover it. 00:41:04.818 [2024-07-11 02:46:55.094506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.818 [2024-07-11 02:46:55.094544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.818 qpair failed and we were unable to recover it. 
00:41:04.818 [2024-07-11 02:46:55.094639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.818 [2024-07-11 02:46:55.094666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.818 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.094759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.094786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.094873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.094901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.094988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.095015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.095107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.095134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 
00:41:04.819 [2024-07-11 02:46:55.095221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.095247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.095332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.095364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.095458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.095487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.095577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.095604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.095694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.095721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 
00:41:04.819 [2024-07-11 02:46:55.095804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.095831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.095922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.095948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.096144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.096170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.096253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.096279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.096369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.096395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 
00:41:04.819 [2024-07-11 02:46:55.096475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.096501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.096595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.096621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.096708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.096734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.096826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.096855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.096943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.096969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 
00:41:04.819 [2024-07-11 02:46:55.097068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.097098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.097194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.097222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.097317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.097345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.097428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.097456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.097551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.097578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 
00:41:04.819 [2024-07-11 02:46:55.097667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.097694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.097783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.097812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.097901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.097928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.098017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.098045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.098139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.098166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 
00:41:04.819 [2024-07-11 02:46:55.098254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.098282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.098374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.098401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.098487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.098520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.098609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.098637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.098733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.098759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 
00:41:04.819 [2024-07-11 02:46:55.098844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.098870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.098953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.098980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.099073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.099101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.099193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.819 [2024-07-11 02:46:55.099219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.819 qpair failed and we were unable to recover it. 00:41:04.819 [2024-07-11 02:46:55.099300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.820 [2024-07-11 02:46:55.099327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.820 qpair failed and we were unable to recover it. 
00:41:04.820 [2024-07-11 02:46:55.099417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.820 [2024-07-11 02:46:55.099444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.820 qpair failed and we were unable to recover it. 00:41:04.820 [2024-07-11 02:46:55.099538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.820 [2024-07-11 02:46:55.099569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.820 qpair failed and we were unable to recover it. 00:41:04.820 [2024-07-11 02:46:55.099659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.820 [2024-07-11 02:46:55.099686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.820 qpair failed and we were unable to recover it. 00:41:04.820 [2024-07-11 02:46:55.099780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.820 [2024-07-11 02:46:55.099808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.820 qpair failed and we were unable to recover it. 00:41:04.820 [2024-07-11 02:46:55.099898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.820 [2024-07-11 02:46:55.099924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.820 qpair failed and we were unable to recover it. 
00:41:04.820 [2024-07-11 02:46:55.100014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.820 [2024-07-11 02:46:55.100042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.820 qpair failed and we were unable to recover it. 00:41:04.820 [2024-07-11 02:46:55.100129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.820 [2024-07-11 02:46:55.100160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.820 qpair failed and we were unable to recover it. 00:41:04.820 [2024-07-11 02:46:55.100243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.820 [2024-07-11 02:46:55.100270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.820 qpair failed and we were unable to recover it. 00:41:04.820 [2024-07-11 02:46:55.100356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.820 [2024-07-11 02:46:55.100382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.820 qpair failed and we were unable to recover it. 00:41:04.820 [2024-07-11 02:46:55.100467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.820 [2024-07-11 02:46:55.100494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.820 qpair failed and we were unable to recover it. 
00:41:04.820 [2024-07-11 02:46:55.100592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.820 [2024-07-11 02:46:55.100621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.820 qpair failed and we were unable to recover it. 00:41:04.820 [2024-07-11 02:46:55.100708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.820 [2024-07-11 02:46:55.100734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.820 qpair failed and we were unable to recover it. 00:41:04.820 [2024-07-11 02:46:55.100820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.820 [2024-07-11 02:46:55.100846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.820 qpair failed and we were unable to recover it. 00:41:04.820 [2024-07-11 02:46:55.100935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.820 [2024-07-11 02:46:55.100961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.820 qpair failed and we were unable to recover it. 00:41:04.820 [2024-07-11 02:46:55.101046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.820 [2024-07-11 02:46:55.101074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.820 qpair failed and we were unable to recover it. 
00:41:04.820 [2024-07-11 02:46:55.101165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.820 [2024-07-11 02:46:55.101192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.820 qpair failed and we were unable to recover it. 00:41:04.820 [2024-07-11 02:46:55.101288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.820 [2024-07-11 02:46:55.101315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.820 qpair failed and we were unable to recover it. 00:41:04.820 [2024-07-11 02:46:55.101402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.820 [2024-07-11 02:46:55.101429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.820 qpair failed and we were unable to recover it. 00:41:04.820 [2024-07-11 02:46:55.101531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.820 [2024-07-11 02:46:55.101561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.820 qpair failed and we were unable to recover it. 00:41:04.820 [2024-07-11 02:46:55.101644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.820 [2024-07-11 02:46:55.101672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.820 qpair failed and we were unable to recover it. 
00:41:04.820 [2024-07-11 02:46:55.101771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.820 [2024-07-11 02:46:55.101798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.820 qpair failed and we were unable to recover it.
00:41:04.820 [2024-07-11 02:46:55.101880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.820 [2024-07-11 02:46:55.101907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.820 qpair failed and we were unable to recover it.
00:41:04.820 [2024-07-11 02:46:55.101999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.820 [2024-07-11 02:46:55.102026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.820 qpair failed and we were unable to recover it.
00:41:04.820 [2024-07-11 02:46:55.102120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.820 [2024-07-11 02:46:55.102146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.820 qpair failed and we were unable to recover it.
00:41:04.820 [2024-07-11 02:46:55.102226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.820 [2024-07-11 02:46:55.102253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.820 qpair failed and we were unable to recover it.
00:41:04.820 [2024-07-11 02:46:55.102340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.820 [2024-07-11 02:46:55.102366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.820 qpair failed and we were unable to recover it.
00:41:04.820 [2024-07-11 02:46:55.102461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.820 [2024-07-11 02:46:55.102487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.820 qpair failed and we were unable to recover it.
00:41:04.820 [2024-07-11 02:46:55.102583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.820 [2024-07-11 02:46:55.102610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.820 qpair failed and we were unable to recover it.
00:41:04.820 [2024-07-11 02:46:55.102702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.820 [2024-07-11 02:46:55.102729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.820 qpair failed and we were unable to recover it.
00:41:04.820 [2024-07-11 02:46:55.102824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.820 [2024-07-11 02:46:55.102850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.820 qpair failed and we were unable to recover it.
00:41:04.820 [2024-07-11 02:46:55.102944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.820 [2024-07-11 02:46:55.102972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.820 qpair failed and we were unable to recover it.
00:41:04.820 [2024-07-11 02:46:55.103063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.820 [2024-07-11 02:46:55.103089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.820 qpair failed and we were unable to recover it.
00:41:04.820 [2024-07-11 02:46:55.103184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.820 [2024-07-11 02:46:55.103213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.820 qpair failed and we were unable to recover it.
00:41:04.820 [2024-07-11 02:46:55.103305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.820 [2024-07-11 02:46:55.103337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.820 qpair failed and we were unable to recover it.
00:41:04.820 [2024-07-11 02:46:55.103430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.820 [2024-07-11 02:46:55.103456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.820 qpair failed and we were unable to recover it.
00:41:04.820 [2024-07-11 02:46:55.103543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.820 [2024-07-11 02:46:55.103570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.820 qpair failed and we were unable to recover it.
00:41:04.820 [2024-07-11 02:46:55.103672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.820 [2024-07-11 02:46:55.103700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.820 qpair failed and we were unable to recover it.
00:41:04.820 [2024-07-11 02:46:55.103786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.820 [2024-07-11 02:46:55.103813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.820 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.103896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.103922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.104010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.104036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.104121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.104147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.104232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.104258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.104348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.104376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.104465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.104495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.104603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.104632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.104721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.104749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.104842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.104869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.104967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.104993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.105089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.105117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.105207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.105234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.105320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.105347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.105433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.105459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.105557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.105585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.105683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.105709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.105800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.105828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.105916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.105942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.106027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.106054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.106136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.106162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.106247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.106274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.106364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.106391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.106490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.106526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.106616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.106643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.106726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.106753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.106839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.106866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.106956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.106983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.107075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.107101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.107198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.107226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.107316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.107343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.107426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.107452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.107540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.107567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.107650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.107676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.107764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.107792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.107884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.107910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.107998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.108031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.108125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.108152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.108239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.108266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.108354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.821 [2024-07-11 02:46:55.108380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.821 qpair failed and we were unable to recover it.
00:41:04.821 [2024-07-11 02:46:55.108470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.108496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.108590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.108619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.108713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.108741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.108826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.108853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.108943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.108971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.109062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.109089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.109182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.109212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.109308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.109337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.109431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.109459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.109549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.109576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.109669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.109695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.109776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.109803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.109884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.109911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.110004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.110032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.110124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.110152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.110242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.110270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.110362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.110389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.110473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.110502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.110595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.110621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.110711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.110738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.110818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.110844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.110925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.110951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.111043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.111073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.111164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.111197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.111296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.111325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.111418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.111445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.111529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.111556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.111642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.111668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.111752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.111778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.111866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.111895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.111985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.112013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.112105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.822 [2024-07-11 02:46:55.112133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.822 qpair failed and we were unable to recover it.
00:41:04.822 [2024-07-11 02:46:55.112214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.823 [2024-07-11 02:46:55.112241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.823 qpair failed and we were unable to recover it.
00:41:04.823 [2024-07-11 02:46:55.112331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.823 [2024-07-11 02:46:55.112357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.823 qpair failed and we were unable to recover it.
00:41:04.823 [2024-07-11 02:46:55.112443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.823 [2024-07-11 02:46:55.112469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.823 qpair failed and we were unable to recover it.
00:41:04.823 [2024-07-11 02:46:55.112561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.823 [2024-07-11 02:46:55.112588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.823 qpair failed and we were unable to recover it.
00:41:04.823 [2024-07-11 02:46:55.112674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.823 [2024-07-11 02:46:55.112700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.823 qpair failed and we were unable to recover it.
00:41:04.823 [2024-07-11 02:46:55.112792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.823 [2024-07-11 02:46:55.112820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.823 qpair failed and we were unable to recover it.
00:41:04.823 [2024-07-11 02:46:55.112907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.823 [2024-07-11 02:46:55.112934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.823 qpair failed and we were unable to recover it.
00:41:04.823 [2024-07-11 02:46:55.113024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.823 [2024-07-11 02:46:55.113052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.823 qpair failed and we were unable to recover it.
00:41:04.823 [2024-07-11 02:46:55.113148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.823 [2024-07-11 02:46:55.113174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.823 qpair failed and we were unable to recover it.
00:41:04.823 [2024-07-11 02:46:55.113272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.823 [2024-07-11 02:46:55.113300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.823 qpair failed and we were unable to recover it.
00:41:04.823 [2024-07-11 02:46:55.113385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.823 [2024-07-11 02:46:55.113415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.823 qpair failed and we were unable to recover it.
00:41:04.823 [2024-07-11 02:46:55.113502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.823 [2024-07-11 02:46:55.113539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.823 qpair failed and we were unable to recover it.
00:41:04.823 [2024-07-11 02:46:55.113633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.823 [2024-07-11 02:46:55.113662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.823 qpair failed and we were unable to recover it.
00:41:04.823 [2024-07-11 02:46:55.113750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.823 [2024-07-11 02:46:55.113777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:04.823 qpair failed and we were unable to recover it.
00:41:04.823 [2024-07-11 02:46:55.113866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.823 [2024-07-11 02:46:55.113895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.823 qpair failed and we were unable to recover it.
00:41:04.823 [2024-07-11 02:46:55.113980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.823 [2024-07-11 02:46:55.114009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.823 qpair failed and we were unable to recover it.
00:41:04.823 [2024-07-11 02:46:55.114102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.823 [2024-07-11 02:46:55.114129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.823 qpair failed and we were unable to recover it.
00:41:04.823 [2024-07-11 02:46:55.114222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.823 [2024-07-11 02:46:55.114249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:04.823 qpair failed and we were unable to recover it.
00:41:04.823 [2024-07-11 02:46:55.114335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.823 [2024-07-11 02:46:55.114362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.823 qpair failed and we were unable to recover it.
00:41:04.823 [2024-07-11 02:46:55.114454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.823 [2024-07-11 02:46:55.114482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.823 qpair failed and we were unable to recover it.
00:41:04.823 [2024-07-11 02:46:55.114579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.823 [2024-07-11 02:46:55.114607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.823 qpair failed and we were unable to recover it.
00:41:04.823 [2024-07-11 02:46:55.114697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.823 [2024-07-11 02:46:55.114723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.823 qpair failed and we were unable to recover it.
00:41:04.823 [2024-07-11 02:46:55.114811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.823 [2024-07-11 02:46:55.114838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.823 qpair failed and we were unable to recover it.
00:41:04.823 [2024-07-11 02:46:55.114924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.823 [2024-07-11 02:46:55.114950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.823 qpair failed and we were unable to recover it.
00:41:04.823 [2024-07-11 02:46:55.115035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.823 [2024-07-11 02:46:55.115061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.823 qpair failed and we were unable to recover it.
00:41:04.823 [2024-07-11 02:46:55.115158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.823 [2024-07-11 02:46:55.115187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:04.823 qpair failed and we were unable to recover it.
00:41:04.823 [2024-07-11 02:46:55.115276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.823 [2024-07-11 02:46:55.115302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.823 qpair failed and we were unable to recover it. 00:41:04.823 [2024-07-11 02:46:55.115389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.823 [2024-07-11 02:46:55.115416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.823 qpair failed and we were unable to recover it. 00:41:04.823 [2024-07-11 02:46:55.115502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.823 [2024-07-11 02:46:55.115534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.823 qpair failed and we were unable to recover it. 00:41:04.823 [2024-07-11 02:46:55.115619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.823 [2024-07-11 02:46:55.115646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.823 qpair failed and we were unable to recover it. 00:41:04.823 [2024-07-11 02:46:55.115733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.823 [2024-07-11 02:46:55.115762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.823 qpair failed and we were unable to recover it. 
00:41:04.823 [2024-07-11 02:46:55.115855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.823 [2024-07-11 02:46:55.115890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.823 qpair failed and we were unable to recover it. 00:41:04.823 [2024-07-11 02:46:55.115987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.823 [2024-07-11 02:46:55.116014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.823 qpair failed and we were unable to recover it. 00:41:04.823 [2024-07-11 02:46:55.116098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.823 [2024-07-11 02:46:55.116124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.823 qpair failed and we were unable to recover it. 00:41:04.823 [2024-07-11 02:46:55.116209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.823 [2024-07-11 02:46:55.116236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.823 qpair failed and we were unable to recover it. 00:41:04.823 [2024-07-11 02:46:55.116332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.823 [2024-07-11 02:46:55.116361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.823 qpair failed and we were unable to recover it. 
00:41:04.823 [2024-07-11 02:46:55.116450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.823 [2024-07-11 02:46:55.116477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.823 qpair failed and we were unable to recover it. 00:41:04.823 [2024-07-11 02:46:55.116578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.823 [2024-07-11 02:46:55.116607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.823 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.116695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.116723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.116815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.116845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.116938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.116967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 
00:41:04.824 [2024-07-11 02:46:55.117051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.117078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.117163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.117189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.117282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.117308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.117391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.117418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.117521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.117549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 
00:41:04.824 [2024-07-11 02:46:55.117639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.117667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.117750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.117778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.117870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.117898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.117992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.118020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.118109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.118136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 
00:41:04.824 [2024-07-11 02:46:55.118227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.118256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.118339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.118366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.118455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.118482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.118578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.118604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.118697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.118724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 
00:41:04.824 [2024-07-11 02:46:55.118807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.118833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.118914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.118940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.119036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.119068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.119159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.119187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.119280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.119309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 
00:41:04.824 [2024-07-11 02:46:55.119396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.119423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.119517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.119545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.119647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.119674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.119761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.119787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.119875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.119901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 
00:41:04.824 [2024-07-11 02:46:55.119991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.120019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.120102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.120129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.120217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.120243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.120333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.120359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.120449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.120476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 
00:41:04.824 [2024-07-11 02:46:55.120583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.120613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.120716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.120743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.120824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.120851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.120943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.120971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.121065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.121092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 
00:41:04.824 [2024-07-11 02:46:55.121186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.121214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.121307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.121333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.824 [2024-07-11 02:46:55.121421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.824 [2024-07-11 02:46:55.121449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.824 qpair failed and we were unable to recover it. 00:41:04.825 [2024-07-11 02:46:55.121550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.825 [2024-07-11 02:46:55.121576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.825 qpair failed and we were unable to recover it. 00:41:04.825 [2024-07-11 02:46:55.121663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.825 [2024-07-11 02:46:55.121689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.825 qpair failed and we were unable to recover it. 
00:41:04.825 [2024-07-11 02:46:55.121784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.825 [2024-07-11 02:46:55.121813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.825 qpair failed and we were unable to recover it. 00:41:04.825 [2024-07-11 02:46:55.121906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.825 [2024-07-11 02:46:55.121933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.825 qpair failed and we were unable to recover it. 00:41:04.825 [2024-07-11 02:46:55.122024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.825 [2024-07-11 02:46:55.122053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.825 qpair failed and we were unable to recover it. 00:41:04.825 [2024-07-11 02:46:55.122134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.825 [2024-07-11 02:46:55.122160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.825 qpair failed and we were unable to recover it. 00:41:04.825 [2024-07-11 02:46:55.122249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.825 [2024-07-11 02:46:55.122280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.825 qpair failed and we were unable to recover it. 
00:41:04.825 [2024-07-11 02:46:55.122370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.825 [2024-07-11 02:46:55.122397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.825 qpair failed and we were unable to recover it. 00:41:04.825 [2024-07-11 02:46:55.122494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.825 [2024-07-11 02:46:55.122528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.825 qpair failed and we were unable to recover it. 00:41:04.825 [2024-07-11 02:46:55.122618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.825 [2024-07-11 02:46:55.122645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.825 qpair failed and we were unable to recover it. 00:41:04.825 [2024-07-11 02:46:55.122728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.825 [2024-07-11 02:46:55.122754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.825 qpair failed and we were unable to recover it. 00:41:04.825 [2024-07-11 02:46:55.122842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.825 [2024-07-11 02:46:55.122870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.825 qpair failed and we were unable to recover it. 
00:41:04.825 [2024-07-11 02:46:55.122961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.825 [2024-07-11 02:46:55.122987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.825 qpair failed and we were unable to recover it. 00:41:04.825 [2024-07-11 02:46:55.123072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.825 [2024-07-11 02:46:55.123098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.825 qpair failed and we were unable to recover it. 00:41:04.825 [2024-07-11 02:46:55.123187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.825 [2024-07-11 02:46:55.123214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.825 qpair failed and we were unable to recover it. 00:41:04.825 [2024-07-11 02:46:55.123309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.825 [2024-07-11 02:46:55.123338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.825 qpair failed and we were unable to recover it. 00:41:04.825 [2024-07-11 02:46:55.123432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.825 [2024-07-11 02:46:55.123462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.825 qpair failed and we were unable to recover it. 
00:41:04.825 [2024-07-11 02:46:55.123568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.825 [2024-07-11 02:46:55.123596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.825 qpair failed and we were unable to recover it. 00:41:04.825 [2024-07-11 02:46:55.123684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.825 [2024-07-11 02:46:55.123711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.825 qpair failed and we were unable to recover it. 00:41:04.825 [2024-07-11 02:46:55.123796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.825 [2024-07-11 02:46:55.123823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.825 qpair failed and we were unable to recover it. 00:41:04.825 [2024-07-11 02:46:55.123915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.825 [2024-07-11 02:46:55.123941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.825 qpair failed and we were unable to recover it. 00:41:04.825 [2024-07-11 02:46:55.124026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.825 [2024-07-11 02:46:55.124052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.825 qpair failed and we were unable to recover it. 
00:41:04.825 [2024-07-11 02:46:55.124138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.825 [2024-07-11 02:46:55.124166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.825 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix.c:1038:posix_sock_create "connect() failed, errno = 111", nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock "sock connection error", then "qpair failed and we were unable to recover it." — repeats continuously from 02:46:55.124138 through 02:46:55.138117, alternating across tqpair=0x2266180, 0x7f3334000b90, 0x7f333c000b90, and 0x7f332c000b90, always with addr=10.0.0.2, port=4420 ...]
00:41:04.828 [2024-07-11 02:46:55.138202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.828 [2024-07-11 02:46:55.138228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.828 qpair failed and we were unable to recover it. 00:41:04.828 [2024-07-11 02:46:55.138427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.828 [2024-07-11 02:46:55.138453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.828 qpair failed and we were unable to recover it. 00:41:04.828 [2024-07-11 02:46:55.138553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.828 [2024-07-11 02:46:55.138580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.828 qpair failed and we were unable to recover it. 00:41:04.828 [2024-07-11 02:46:55.138671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.828 [2024-07-11 02:46:55.138697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.828 qpair failed and we were unable to recover it. 00:41:04.828 [2024-07-11 02:46:55.138790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.828 [2024-07-11 02:46:55.138818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.828 qpair failed and we were unable to recover it. 
00:41:04.828 [2024-07-11 02:46:55.138905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.828 [2024-07-11 02:46:55.138931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.828 qpair failed and we were unable to recover it. 00:41:04.828 [2024-07-11 02:46:55.139028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.828 [2024-07-11 02:46:55.139054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.828 qpair failed and we were unable to recover it. 00:41:04.828 [2024-07-11 02:46:55.139141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.828 [2024-07-11 02:46:55.139167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.828 qpair failed and we were unable to recover it. 00:41:04.828 [2024-07-11 02:46:55.139253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.828 [2024-07-11 02:46:55.139282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.828 qpair failed and we were unable to recover it. 00:41:04.828 [2024-07-11 02:46:55.139382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.828 [2024-07-11 02:46:55.139409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.828 qpair failed and we were unable to recover it. 
00:41:04.828 [2024-07-11 02:46:55.139508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.828 [2024-07-11 02:46:55.139545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.828 qpair failed and we were unable to recover it. 00:41:04.828 [2024-07-11 02:46:55.139634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.828 [2024-07-11 02:46:55.139662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.828 qpair failed and we were unable to recover it. 00:41:04.828 [2024-07-11 02:46:55.139751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.828 [2024-07-11 02:46:55.139778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.828 qpair failed and we were unable to recover it. 00:41:04.828 [2024-07-11 02:46:55.139866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.828 [2024-07-11 02:46:55.139892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.828 qpair failed and we were unable to recover it. 00:41:04.828 [2024-07-11 02:46:55.139986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.828 [2024-07-11 02:46:55.140012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.828 qpair failed and we were unable to recover it. 
00:41:04.828 [2024-07-11 02:46:55.140103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.828 [2024-07-11 02:46:55.140130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.828 qpair failed and we were unable to recover it. 00:41:04.828 [2024-07-11 02:46:55.140217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.828 [2024-07-11 02:46:55.140244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.828 qpair failed and we were unable to recover it. 00:41:04.828 [2024-07-11 02:46:55.140331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.828 [2024-07-11 02:46:55.140358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.828 qpair failed and we were unable to recover it. 00:41:04.828 [2024-07-11 02:46:55.140448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.828 [2024-07-11 02:46:55.140475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.828 qpair failed and we were unable to recover it. 00:41:04.828 [2024-07-11 02:46:55.140582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.828 [2024-07-11 02:46:55.140609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.828 qpair failed and we were unable to recover it. 
00:41:04.828 [2024-07-11 02:46:55.140698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.140724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.140814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.140840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.140934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.140961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.141053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.141083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.141172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.141198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 
00:41:04.829 [2024-07-11 02:46:55.141290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.141317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.141411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.141439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.141531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.141558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.141648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.141675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.141771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.141797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 
00:41:04.829 [2024-07-11 02:46:55.141895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.141925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.142021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.142052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.142145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.142172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.142272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.142299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.142392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.142420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 
00:41:04.829 [2024-07-11 02:46:55.142517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.142546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.142638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.142665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.142760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.142786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.142872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.142898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.142987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.143013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 
00:41:04.829 [2024-07-11 02:46:55.143107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.143133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.143223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.143249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.143343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.143370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.143457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.143486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.143609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.143638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 
00:41:04.829 [2024-07-11 02:46:55.143730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.143757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.143849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.143876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.143958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.143984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.144067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.144093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.144182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.144210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 
00:41:04.829 [2024-07-11 02:46:55.144304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.144331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.144427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.144454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.144549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.144576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.144674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.144700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.144786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.144813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 
00:41:04.829 [2024-07-11 02:46:55.144896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.144922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.145012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.145038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.145141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.145171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.145280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.145309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.145402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.145429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 
00:41:04.829 [2024-07-11 02:46:55.145529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.829 [2024-07-11 02:46:55.145558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.829 qpair failed and we were unable to recover it. 00:41:04.829 [2024-07-11 02:46:55.145653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.830 [2024-07-11 02:46:55.145680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.830 qpair failed and we were unable to recover it. 00:41:04.830 [2024-07-11 02:46:55.145772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.830 [2024-07-11 02:46:55.145800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.830 qpair failed and we were unable to recover it. 00:41:04.830 [2024-07-11 02:46:55.145888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.830 [2024-07-11 02:46:55.145914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.830 qpair failed and we were unable to recover it. 00:41:04.830 [2024-07-11 02:46:55.146004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.830 [2024-07-11 02:46:55.146030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.830 qpair failed and we were unable to recover it. 
00:41:04.830 [2024-07-11 02:46:55.146121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.830 [2024-07-11 02:46:55.146147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.830 qpair failed and we were unable to recover it. 00:41:04.830 [2024-07-11 02:46:55.146243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.830 [2024-07-11 02:46:55.146272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.830 qpair failed and we were unable to recover it. 00:41:04.830 [2024-07-11 02:46:55.146359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.830 [2024-07-11 02:46:55.146386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.830 qpair failed and we were unable to recover it. 00:41:04.830 [2024-07-11 02:46:55.146471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.830 [2024-07-11 02:46:55.146498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.830 qpair failed and we were unable to recover it. 00:41:04.830 [2024-07-11 02:46:55.146596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.830 [2024-07-11 02:46:55.146623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.830 qpair failed and we were unable to recover it. 
00:41:04.830 [2024-07-11 02:46:55.146710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.830 [2024-07-11 02:46:55.146736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.830 qpair failed and we were unable to recover it. 00:41:04.830 [2024-07-11 02:46:55.146829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.830 [2024-07-11 02:46:55.146855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.830 qpair failed and we were unable to recover it. 00:41:04.830 [2024-07-11 02:46:55.146942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.830 [2024-07-11 02:46:55.146969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.830 qpair failed and we were unable to recover it. 00:41:04.830 [2024-07-11 02:46:55.147063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.830 [2024-07-11 02:46:55.147090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.830 qpair failed and we were unable to recover it. 00:41:04.830 [2024-07-11 02:46:55.147185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.830 [2024-07-11 02:46:55.147214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.830 qpair failed and we were unable to recover it. 
00:41:04.830 [2024-07-11 02:46:55.147306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:04.830 [2024-07-11 02:46:55.147334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:04.830 qpair failed and we were unable to recover it.
00:41:04.832 [the same three-line failure (posix.c:1038 connect() failed, errno = 111; nvme_tcp.c:2383 sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 02:46:55.147425 through 02:46:55.161484, cycling over tqpair handles 0x2266180, 0x7f332c000b90, 0x7f3334000b90, and 0x7f333c000b90, all targeting addr=10.0.0.2, port=4420]
00:41:04.833 [2024-07-11 02:46:55.161590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.161619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 00:41:04.833 [2024-07-11 02:46:55.161718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.161745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 00:41:04.833 [2024-07-11 02:46:55.161837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.161864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 00:41:04.833 [2024-07-11 02:46:55.161957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.161985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 00:41:04.833 [2024-07-11 02:46:55.162080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.162110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 
00:41:04.833 [2024-07-11 02:46:55.162199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.162226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 00:41:04.833 [2024-07-11 02:46:55.162339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.162365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 00:41:04.833 [2024-07-11 02:46:55.162457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.162485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 00:41:04.833 [2024-07-11 02:46:55.162621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.162649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 00:41:04.833 [2024-07-11 02:46:55.162740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.162766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 
00:41:04.833 [2024-07-11 02:46:55.162847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.162874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 00:41:04.833 [2024-07-11 02:46:55.162955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.162982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 00:41:04.833 [2024-07-11 02:46:55.163077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.163104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 00:41:04.833 [2024-07-11 02:46:55.163222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.163249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 00:41:04.833 [2024-07-11 02:46:55.163370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.163400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 
00:41:04.833 [2024-07-11 02:46:55.163495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.163530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 00:41:04.833 [2024-07-11 02:46:55.163623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.163651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 00:41:04.833 [2024-07-11 02:46:55.163761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.163789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 00:41:04.833 [2024-07-11 02:46:55.163883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.163911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 00:41:04.833 [2024-07-11 02:46:55.164019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.164047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 
00:41:04.833 [2024-07-11 02:46:55.164135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.164162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 00:41:04.833 [2024-07-11 02:46:55.164255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.164281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 00:41:04.833 [2024-07-11 02:46:55.164373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.164399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 00:41:04.833 [2024-07-11 02:46:55.164488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.164522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 00:41:04.833 [2024-07-11 02:46:55.164608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.164635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 
00:41:04.833 [2024-07-11 02:46:55.164722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.164748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 00:41:04.833 [2024-07-11 02:46:55.164835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.164861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 00:41:04.833 [2024-07-11 02:46:55.164954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.164980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 00:41:04.833 [2024-07-11 02:46:55.165069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.165101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 00:41:04.833 [2024-07-11 02:46:55.165192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.833 [2024-07-11 02:46:55.165220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.833 qpair failed and we were unable to recover it. 
00:41:04.833 [2024-07-11 02:46:55.165307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.165334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.165426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.165455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.165580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.165610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.165700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.165727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.165814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.165841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 
00:41:04.834 [2024-07-11 02:46:55.165947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.165974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.166065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.166092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.166189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.166217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.166307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.166335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.166431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.166477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 
00:41:04.834 [2024-07-11 02:46:55.166572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.166600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.166798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.166824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.166919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.166946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.167034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.167061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.167149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.167176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 
00:41:04.834 [2024-07-11 02:46:55.167273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.167300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.167391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.167417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.167497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.167530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.167635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.167664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.167754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.167782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 
00:41:04.834 [2024-07-11 02:46:55.167874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.167900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.167996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.168024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.168105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.168131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.168223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.168250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.168341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.168367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 
00:41:04.834 [2024-07-11 02:46:55.168453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.168484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.168575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.168601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.168693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.168719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.168812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.168838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.169036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.169062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 
00:41:04.834 [2024-07-11 02:46:55.169179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.169205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.169290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.169316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.169524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.169551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.169637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.169663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.169743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.169770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 
00:41:04.834 [2024-07-11 02:46:55.169851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.169877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.170073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.170099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.170192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:04.834 [2024-07-11 02:46:55.170222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:04.834 qpair failed and we were unable to recover it. 00:41:04.834 [2024-07-11 02:46:55.170309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.110 [2024-07-11 02:46:55.170337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.110 qpair failed and we were unable to recover it. 00:41:05.110 [2024-07-11 02:46:55.170451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.110 [2024-07-11 02:46:55.170481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.110 qpair failed and we were unable to recover it. 
00:41:05.110 [2024-07-11 02:46:55.170586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.110 [2024-07-11 02:46:55.170627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.110 qpair failed and we were unable to recover it. 00:41:05.110 [2024-07-11 02:46:55.170732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.110 [2024-07-11 02:46:55.170761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.110 qpair failed and we were unable to recover it. 00:41:05.110 [2024-07-11 02:46:55.170854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.110 [2024-07-11 02:46:55.170884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.110 qpair failed and we were unable to recover it. 00:41:05.110 [2024-07-11 02:46:55.170976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.110 [2024-07-11 02:46:55.171003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.110 qpair failed and we were unable to recover it. 00:41:05.110 [2024-07-11 02:46:55.171091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.110 [2024-07-11 02:46:55.171117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.110 qpair failed and we were unable to recover it. 
00:41:05.110 [2024-07-11 02:46:55.171209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.110 [2024-07-11 02:46:55.171235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.110 qpair failed and we were unable to recover it.
00:41:05.112 [... the same three-line record (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 02:46:55.171331 through 02:46:55.185463, all targeting addr=10.0.0.2, port=4420, for tqpair values 0x2266180, 0x7f3334000b90, 0x7f332c000b90, and 0x7f333c000b90 ...]
00:41:05.112 [2024-07-11 02:46:55.185563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.112 [2024-07-11 02:46:55.185593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.112 qpair failed and we were unable to recover it. 00:41:05.112 [2024-07-11 02:46:55.185686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.112 [2024-07-11 02:46:55.185713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.112 qpair failed and we were unable to recover it. 00:41:05.112 [2024-07-11 02:46:55.185802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.112 [2024-07-11 02:46:55.185829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.112 qpair failed and we were unable to recover it. 00:41:05.112 [2024-07-11 02:46:55.185945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.112 [2024-07-11 02:46:55.185972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.112 qpair failed and we were unable to recover it. 00:41:05.112 [2024-07-11 02:46:55.186062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.112 [2024-07-11 02:46:55.186088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.112 qpair failed and we were unable to recover it. 
00:41:05.112 [2024-07-11 02:46:55.186175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.112 [2024-07-11 02:46:55.186203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.112 qpair failed and we were unable to recover it. 00:41:05.112 [2024-07-11 02:46:55.186294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.112 [2024-07-11 02:46:55.186321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.112 qpair failed and we were unable to recover it. 00:41:05.112 [2024-07-11 02:46:55.186410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.112 [2024-07-11 02:46:55.186437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.112 qpair failed and we were unable to recover it. 00:41:05.112 [2024-07-11 02:46:55.186532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.112 [2024-07-11 02:46:55.186561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.112 qpair failed and we were unable to recover it. 00:41:05.112 [2024-07-11 02:46:55.186652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.112 [2024-07-11 02:46:55.186679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.112 qpair failed and we were unable to recover it. 
00:41:05.112 [2024-07-11 02:46:55.186768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.112 [2024-07-11 02:46:55.186794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.112 qpair failed and we were unable to recover it. 00:41:05.112 [2024-07-11 02:46:55.186877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.112 [2024-07-11 02:46:55.186904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.112 qpair failed and we were unable to recover it. 00:41:05.112 [2024-07-11 02:46:55.186994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.112 [2024-07-11 02:46:55.187020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.112 qpair failed and we were unable to recover it. 00:41:05.112 [2024-07-11 02:46:55.187220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.112 [2024-07-11 02:46:55.187251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.112 qpair failed and we were unable to recover it. 00:41:05.112 [2024-07-11 02:46:55.187449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.187475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 
00:41:05.113 [2024-07-11 02:46:55.187561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.187587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.187675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.187701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.187786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.187812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.187892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.187918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.188004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.188030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 
00:41:05.113 [2024-07-11 02:46:55.188130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.188159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.188254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.188280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.188375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.188405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.188500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.188535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.188632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.188660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 
00:41:05.113 [2024-07-11 02:46:55.188759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.188786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.188877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.188904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.189006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.189033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.189133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.189160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.189246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.189273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 
00:41:05.113 [2024-07-11 02:46:55.189363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.189390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.189483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.189518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.189612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.189639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.189724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.189750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.189836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.189862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 
00:41:05.113 [2024-07-11 02:46:55.189948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.189976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.190062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.190088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.190174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.190200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.190286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.190312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.190397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.190427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 
00:41:05.113 [2024-07-11 02:46:55.190520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.190554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.190655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.190683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.190775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.190802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.190888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.190916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.191001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.191029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 
00:41:05.113 [2024-07-11 02:46:55.191123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.191150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.191241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.191269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.191356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.191384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.191478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.191506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.191599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.191626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 
00:41:05.113 [2024-07-11 02:46:55.191710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.191737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.191827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.191855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.191950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.191981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.192076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.192102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.192186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.192212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 
00:41:05.113 [2024-07-11 02:46:55.192307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.192334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.192433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.192463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.192572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.192600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.192698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.113 [2024-07-11 02:46:55.192725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.113 qpair failed and we were unable to recover it. 00:41:05.113 [2024-07-11 02:46:55.192813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.114 [2024-07-11 02:46:55.192839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.114 qpair failed and we were unable to recover it. 
00:41:05.114 [2024-07-11 02:46:55.192934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.114 [2024-07-11 02:46:55.192965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.114 qpair failed and we were unable to recover it. 00:41:05.114 [2024-07-11 02:46:55.193063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.114 [2024-07-11 02:46:55.193091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.114 qpair failed and we were unable to recover it. 00:41:05.114 [2024-07-11 02:46:55.193183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.114 [2024-07-11 02:46:55.193211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.114 qpair failed and we were unable to recover it. 00:41:05.114 [2024-07-11 02:46:55.193300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.114 [2024-07-11 02:46:55.193333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.114 qpair failed and we were unable to recover it. 00:41:05.114 [2024-07-11 02:46:55.193430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.114 [2024-07-11 02:46:55.193482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.114 qpair failed and we were unable to recover it. 
00:41:05.114 [2024-07-11 02:46:55.193691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.114 [2024-07-11 02:46:55.193719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.114 qpair failed and we were unable to recover it. 00:41:05.114 [2024-07-11 02:46:55.193813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.114 [2024-07-11 02:46:55.193840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.114 qpair failed and we were unable to recover it. 00:41:05.114 [2024-07-11 02:46:55.193935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.114 [2024-07-11 02:46:55.193965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.114 qpair failed and we were unable to recover it. 00:41:05.114 [2024-07-11 02:46:55.194069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.114 [2024-07-11 02:46:55.194095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.114 qpair failed and we were unable to recover it. 00:41:05.114 [2024-07-11 02:46:55.194184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.114 [2024-07-11 02:46:55.194210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.114 qpair failed and we were unable to recover it. 
00:41:05.114 [2024-07-11 02:46:55.194303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.114 [2024-07-11 02:46:55.194334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.114 qpair failed and we were unable to recover it. 00:41:05.114 [2024-07-11 02:46:55.194426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.114 [2024-07-11 02:46:55.194452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.114 qpair failed and we were unable to recover it. 00:41:05.114 [2024-07-11 02:46:55.194549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.114 [2024-07-11 02:46:55.194577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.114 qpair failed and we were unable to recover it. 00:41:05.114 [2024-07-11 02:46:55.194680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.114 [2024-07-11 02:46:55.194709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.114 qpair failed and we were unable to recover it. 00:41:05.114 [2024-07-11 02:46:55.194794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.114 [2024-07-11 02:46:55.194821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.114 qpair failed and we were unable to recover it. 
00:41:05.114 [2024-07-11 02:46:55.194915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.194942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.195029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.195055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.195145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.195172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.195267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.195296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.195391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.195418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.195502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.195536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.195629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.195655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.195735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.195761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.195848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.195874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.195962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.195988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.196078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.196105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.196204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.196230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.196316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.196344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.196439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.196468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.196560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.196587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.196680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.196707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.196795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.196821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.196906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.196932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.197024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.197051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.197145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.197173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.197263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.197290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.197380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.197407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.197490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.197526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.197621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.197648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.197847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.197874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.197962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.197989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.198078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.198105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.198190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.198217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.198309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.198338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.198430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.198457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.198556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.198584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.198687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.198715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.198804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.198831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.198921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.198947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.199044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.199072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.199270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.199296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.199394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.199423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.199515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.114 [2024-07-11 02:46:55.199542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.114 qpair failed and we were unable to recover it.
00:41:05.114 [2024-07-11 02:46:55.199739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.199765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.199856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.199882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.199973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.200000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.200095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.200121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.200204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.200230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.200326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.200355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.200447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.200473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.200572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.200599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.200699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.200726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.200811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.200838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.200925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.200955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.201051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.201077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.201161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.201188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.201276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.201302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.201396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.201422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.201518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.201545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.201637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.201663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.201752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.201778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.201869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.201895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.201993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.202023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.202113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.202142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.202235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.202266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.202356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.202382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.202472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.202498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.202596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.202623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.202716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.202744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.202831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.202857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.202942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.202968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.203054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.203081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.203177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.203203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.203303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.203350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.203449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.203476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.203580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.203607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.203692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.203718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.203915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.203941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.204037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.204064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.204151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.204177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.204272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.204298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.204385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.204412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.204504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.204539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.204627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.204654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.204750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.204776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.204874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.204901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.204995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.205021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.205113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.205141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.205231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.205258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.205353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.205379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.205472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.205498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.205613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.205646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.205736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.205762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.205959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.205985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.206071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.206099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.206183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.115 [2024-07-11 02:46:55.206209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.115 qpair failed and we were unable to recover it.
00:41:05.115 [2024-07-11 02:46:55.206308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.116 [2024-07-11 02:46:55.206335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.116 qpair failed and we were unable to recover it.
00:41:05.116 [2024-07-11 02:46:55.206424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.116 [2024-07-11 02:46:55.206451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.116 qpair failed and we were unable to recover it.
00:41:05.116 [2024-07-11 02:46:55.206549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.116 [2024-07-11 02:46:55.206576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.116 qpair failed and we were unable to recover it.
00:41:05.116 [2024-07-11 02:46:55.206671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.116 [2024-07-11 02:46:55.206697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.116 qpair failed and we were unable to recover it.
00:41:05.116 [2024-07-11 02:46:55.206783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.116 [2024-07-11 02:46:55.206809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.116 qpair failed and we were unable to recover it.
00:41:05.116 [2024-07-11 02:46:55.206889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.116 [2024-07-11 02:46:55.206917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.116 qpair failed and we were unable to recover it.
00:41:05.116 [2024-07-11 02:46:55.207003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.116 [2024-07-11 02:46:55.207030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.116 qpair failed and we were unable to recover it.
00:41:05.116 [2024-07-11 02:46:55.207116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.116 [2024-07-11 02:46:55.207142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.116 qpair failed and we were unable to recover it.
00:41:05.116 [2024-07-11 02:46:55.207232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.116 [2024-07-11 02:46:55.207260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.116 qpair failed and we were unable to recover it.
00:41:05.116 [2024-07-11 02:46:55.207362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.116 [2024-07-11 02:46:55.207392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.116 qpair failed and we were unable to recover it.
00:41:05.116 [2024-07-11 02:46:55.207489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.116 [2024-07-11 02:46:55.207525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.116 qpair failed and we were unable to recover it.
00:41:05.116 [2024-07-11 02:46:55.207627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.116 [2024-07-11 02:46:55.207655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.116 qpair failed and we were unable to recover it.
00:41:05.116 [2024-07-11 02:46:55.207747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.116 [2024-07-11 02:46:55.207773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.116 qpair failed and we were unable to recover it.
00:41:05.116 [2024-07-11 02:46:55.207854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.116 [2024-07-11 02:46:55.207880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.116 qpair failed and we were unable to recover it.
00:41:05.116 [2024-07-11 02:46:55.207966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.116 [2024-07-11 02:46:55.207993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.116 qpair failed and we were unable to recover it.
00:41:05.116 [2024-07-11 02:46:55.208085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.116 [2024-07-11 02:46:55.208113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.116 qpair failed and we were unable to recover it.
00:41:05.116 [2024-07-11 02:46:55.208211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.116 [2024-07-11 02:46:55.208240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.116 qpair failed and we were unable to recover it.
00:41:05.116 [2024-07-11 02:46:55.208342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.116 [2024-07-11 02:46:55.208373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.116 qpair failed and we were unable to recover it.
00:41:05.116 [2024-07-11 02:46:55.208478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.116 [2024-07-11 02:46:55.208507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.116 qpair failed and we were unable to recover it.
00:41:05.116 [2024-07-11 02:46:55.208603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.116 [2024-07-11 02:46:55.208630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.116 qpair failed and we were unable to recover it.
00:41:05.116 [2024-07-11 02:46:55.208722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.116 [2024-07-11 02:46:55.208752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.116 qpair failed and we were unable to recover it.
00:41:05.116 [2024-07-11 02:46:55.208847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.116 [2024-07-11 02:46:55.208875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.116 qpair failed and we were unable to recover it.
00:41:05.116 [2024-07-11 02:46:55.208976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.116 [2024-07-11 02:46:55.209004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.116 qpair failed and we were unable to recover it.
00:41:05.116 [2024-07-11 02:46:55.209095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.116 [2024-07-11 02:46:55.209122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.116 qpair failed and we were unable to recover it.
00:41:05.116 [2024-07-11 02:46:55.209322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.209348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 00:41:05.116 [2024-07-11 02:46:55.209442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.209469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 00:41:05.116 [2024-07-11 02:46:55.209569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.209599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 00:41:05.116 [2024-07-11 02:46:55.209694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.209723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 00:41:05.116 [2024-07-11 02:46:55.209811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.209838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 
00:41:05.116 [2024-07-11 02:46:55.209931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.209958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 00:41:05.116 [2024-07-11 02:46:55.210049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.210076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 00:41:05.116 [2024-07-11 02:46:55.210167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.210195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 00:41:05.116 [2024-07-11 02:46:55.210282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.210308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 00:41:05.116 [2024-07-11 02:46:55.210404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.210432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 
00:41:05.116 [2024-07-11 02:46:55.210531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.210558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 00:41:05.116 [2024-07-11 02:46:55.210649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.210676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 00:41:05.116 [2024-07-11 02:46:55.210771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.210798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 00:41:05.116 [2024-07-11 02:46:55.210897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.210925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 00:41:05.116 [2024-07-11 02:46:55.211017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.211045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 
00:41:05.116 [2024-07-11 02:46:55.211127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.211154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 00:41:05.116 [2024-07-11 02:46:55.211251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.211280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 00:41:05.116 [2024-07-11 02:46:55.211381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.211409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 00:41:05.116 [2024-07-11 02:46:55.211491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.211524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 00:41:05.116 [2024-07-11 02:46:55.211625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.211652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 
00:41:05.116 [2024-07-11 02:46:55.211743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.211770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 00:41:05.116 [2024-07-11 02:46:55.211859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.211886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 00:41:05.116 [2024-07-11 02:46:55.211973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.211999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 00:41:05.116 [2024-07-11 02:46:55.212097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.212125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 00:41:05.116 [2024-07-11 02:46:55.212216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.212243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 
00:41:05.116 [2024-07-11 02:46:55.212337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.212363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 00:41:05.116 [2024-07-11 02:46:55.212454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.212481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 00:41:05.116 [2024-07-11 02:46:55.212582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.116 [2024-07-11 02:46:55.212609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.116 qpair failed and we were unable to recover it. 00:41:05.116 [2024-07-11 02:46:55.212700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.212726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.212823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.212849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 
00:41:05.117 [2024-07-11 02:46:55.212939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.212966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.213063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.213089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.213289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.213316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.213407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.213433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.213531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.213558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 
00:41:05.117 [2024-07-11 02:46:55.213643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.213670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.213757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.213785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.213879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.213910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.214002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.214034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.214130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.214161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 
00:41:05.117 [2024-07-11 02:46:55.214256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.214287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.214397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.214425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.214517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.214546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.214648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.214683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.214781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.214809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 
00:41:05.117 [2024-07-11 02:46:55.214897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.214923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.215011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.215039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.215127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.215154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.215245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.215274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.215362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.215389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 
00:41:05.117 [2024-07-11 02:46:55.215479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.215506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.215597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.215624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.215718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.215745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.215827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.215853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.215936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.215962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 
00:41:05.117 [2024-07-11 02:46:55.216059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.216087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.216182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.216211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.216308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.216336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.216427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.216454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.216548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.216575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 
00:41:05.117 [2024-07-11 02:46:55.216668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.216695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.216788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.216816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.216911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.216938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.217029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.217056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.217148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.217175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 
00:41:05.117 [2024-07-11 02:46:55.217256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.217287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.217378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.217405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.217493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.217525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.217616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.217643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.217727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.217754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 
00:41:05.117 [2024-07-11 02:46:55.217847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.217873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.217969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.217996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.218084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.218110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.218193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.218219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 00:41:05.117 [2024-07-11 02:46:55.218305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.117 [2024-07-11 02:46:55.218332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.117 qpair failed and we were unable to recover it. 
00:41:05.118 [2024-07-11 02:46:55.218416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.218442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.218530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.218556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.218646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.218672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.218755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.218781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.218872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.218898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 
00:41:05.118 [2024-07-11 02:46:55.219096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.219123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.219216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.219242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.219327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.219354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.219444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.219470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.219568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.219596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 
00:41:05.118 [2024-07-11 02:46:55.219682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.219709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.219800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.219826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.219915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.219941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.220138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.220165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.220255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.220281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 
00:41:05.118 [2024-07-11 02:46:55.220370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.220396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.220485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.220518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.220606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.220636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.220720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.220745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.220909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.220935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 
00:41:05.118 [2024-07-11 02:46:55.221026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.221054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.221140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.221170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.221263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.221291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.221381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.221408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.221504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.221544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 
00:41:05.118 [2024-07-11 02:46:55.221639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.221666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.221761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.221790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.221927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.221954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.222040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.222066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.222155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.222181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 
00:41:05.118 [2024-07-11 02:46:55.222276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.222304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.222404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.222430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.222532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.222560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.222656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.222683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.222771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.222798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 
00:41:05.118 [2024-07-11 02:46:55.222888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.222914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.223001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.223027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.223111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.223136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.223228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.223254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.223347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.223373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 
00:41:05.118 [2024-07-11 02:46:55.223470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.223494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.223672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.223698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.223792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.223820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.223926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.223954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.224051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.224091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 
00:41:05.118 [2024-07-11 02:46:55.224186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.224212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.224303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.224329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.224422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.224449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.224546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.224573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.224672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.224698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 
00:41:05.118 [2024-07-11 02:46:55.224786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.224812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.224904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.224933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.118 [2024-07-11 02:46:55.225020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.118 [2024-07-11 02:46:55.225055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.118 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.225143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.225170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.225257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.225285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 
00:41:05.119 [2024-07-11 02:46:55.225369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.225395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.225489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.225522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.225725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.225752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.225849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.225877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.225971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.225998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 
00:41:05.119 [2024-07-11 02:46:55.226085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.226112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.226198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.226225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.226321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.226346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.226429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.226454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.226627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.226654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 
00:41:05.119 [2024-07-11 02:46:55.226740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.226767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.226853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.226880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.226963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.226989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.227074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.227100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.227194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.227220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 
00:41:05.119 [2024-07-11 02:46:55.227312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.227338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.227436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.227470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.227575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.227604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.227691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.227718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.227808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.227835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 
00:41:05.119 [2024-07-11 02:46:55.227928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.227956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.228054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.228082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.228168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.228194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.228294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.228323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.228419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.228446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 
00:41:05.119 [2024-07-11 02:46:55.228533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.228561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.228660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.228687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.228772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.228799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.228891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.228919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.229008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.229034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 
00:41:05.119 [2024-07-11 02:46:55.229126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.229152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.229243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.229270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.229363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.229389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.229478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.229506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.229604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.229632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 
00:41:05.119 [2024-07-11 02:46:55.229727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.229754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.229844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.229871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.229962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.229988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.230080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.230106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 00:41:05.119 [2024-07-11 02:46:55.230193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.119 [2024-07-11 02:46:55.230218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.119 qpair failed and we were unable to recover it. 
00:41:05.119 [2024-07-11 02:46:55.230303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.119 [2024-07-11 02:46:55.230330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.119 qpair failed and we were unable to recover it.
[... the same three-line record (posix.c:1038 connect() failed, errno = 111; nvme_tcp.c:2383 sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 02:46:55.230303 through 02:46:55.244462, cycling over tqpair addresses 0x7f333c000b90, 0x7f332c000b90, and 0x2266180, always for addr=10.0.0.2, port=4420 ...]
00:41:05.122 [2024-07-11 02:46:55.244577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.244605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.244704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.244731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.244827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.244853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.244941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.244967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.245068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.245095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 
00:41:05.122 [2024-07-11 02:46:55.245187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.245216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.245304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.245330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.245414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.245440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.245535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.245563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.245665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.245696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 
00:41:05.122 [2024-07-11 02:46:55.245788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.245815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.245906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.245934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.246034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.246063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.246154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.246181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.246318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.246345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 
00:41:05.122 [2024-07-11 02:46:55.246440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.246467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.246564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.246591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.246680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.246707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.246804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.246831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.246926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.246953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 
00:41:05.122 [2024-07-11 02:46:55.247045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.247073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.247166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.247195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.247288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.247315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.247408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.247437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.247534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.247576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 
00:41:05.122 [2024-07-11 02:46:55.247667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.247694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.247780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.247807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.247895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.247922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.248006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.248032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.248119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.248145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 
00:41:05.122 [2024-07-11 02:46:55.248239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.248266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.248359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.248385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.248473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.248499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.248606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.248632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.248726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.248753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 
00:41:05.122 [2024-07-11 02:46:55.248856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.248882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.248975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.249002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.249099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.249126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.249217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.249245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.249334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.249360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 
00:41:05.122 [2024-07-11 02:46:55.249479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.249520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.249623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.249664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.249762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.249790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.249874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.249901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.249991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.250018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 
00:41:05.122 [2024-07-11 02:46:55.250110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.250140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.250240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.250268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.250360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.122 [2024-07-11 02:46:55.250388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.122 qpair failed and we were unable to recover it. 00:41:05.122 [2024-07-11 02:46:55.250530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.250560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 00:41:05.123 [2024-07-11 02:46:55.250650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.250687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 
00:41:05.123 [2024-07-11 02:46:55.250784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.250812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 00:41:05.123 [2024-07-11 02:46:55.250909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.250936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 00:41:05.123 [2024-07-11 02:46:55.251031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.251058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 00:41:05.123 [2024-07-11 02:46:55.251144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.251170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 00:41:05.123 [2024-07-11 02:46:55.251261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.251287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 
00:41:05.123 [2024-07-11 02:46:55.251368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.251394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 00:41:05.123 [2024-07-11 02:46:55.251481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.251507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 00:41:05.123 [2024-07-11 02:46:55.251605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.251631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 00:41:05.123 [2024-07-11 02:46:55.251722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.251748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 00:41:05.123 [2024-07-11 02:46:55.251829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.251854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 
00:41:05.123 [2024-07-11 02:46:55.251981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.252008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 00:41:05.123 [2024-07-11 02:46:55.252099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.252126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 00:41:05.123 [2024-07-11 02:46:55.252211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.252238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 00:41:05.123 [2024-07-11 02:46:55.252336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.252362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 00:41:05.123 [2024-07-11 02:46:55.252450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.252478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 
00:41:05.123 [2024-07-11 02:46:55.252583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.252611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 00:41:05.123 [2024-07-11 02:46:55.252697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.252724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 00:41:05.123 [2024-07-11 02:46:55.252822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.252849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 00:41:05.123 [2024-07-11 02:46:55.252952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.252979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 00:41:05.123 [2024-07-11 02:46:55.253075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.253104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 
00:41:05.123 [2024-07-11 02:46:55.253202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.253228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 00:41:05.123 [2024-07-11 02:46:55.253317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.253344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 00:41:05.123 [2024-07-11 02:46:55.253436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.253463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 00:41:05.123 [2024-07-11 02:46:55.253566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.253593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 00:41:05.123 [2024-07-11 02:46:55.253686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.253712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 
00:41:05.123 [2024-07-11 02:46:55.253794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.253821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 00:41:05.123 [2024-07-11 02:46:55.253913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.253943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 00:41:05.123 [2024-07-11 02:46:55.254026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.254053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 00:41:05.123 [2024-07-11 02:46:55.254134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.254160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 00:41:05.123 [2024-07-11 02:46:55.254253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.123 [2024-07-11 02:46:55.254279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.123 qpair failed and we were unable to recover it. 
00:41:05.125 [... the same three-line sequence (posix.c:1038 connect() failed, errno = 111 / nvme_tcp.c:2383 sock connection error / qpair failed and we were unable to recover it) repeats roughly 110 more times between 02:46:55.254385 and 02:46:55.267820, cycling over tqpair=0x7f333c000b90, 0x7f332c000b90, 0x7f3334000b90, and 0x2266180, all targeting addr=10.0.0.2, port=4420 ...]
00:41:05.125 [2024-07-11 02:46:55.267917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.125 [2024-07-11 02:46:55.267943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.125 qpair failed and we were unable to recover it. 00:41:05.125 [2024-07-11 02:46:55.268046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.125 [2024-07-11 02:46:55.268075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.125 qpair failed and we were unable to recover it. 00:41:05.125 [2024-07-11 02:46:55.268172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.125 [2024-07-11 02:46:55.268201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.125 qpair failed and we were unable to recover it. 00:41:05.125 [2024-07-11 02:46:55.268294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.125 [2024-07-11 02:46:55.268333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.125 qpair failed and we were unable to recover it. 00:41:05.125 [2024-07-11 02:46:55.268457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.125 [2024-07-11 02:46:55.268486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.125 qpair failed and we were unable to recover it. 
00:41:05.125 [2024-07-11 02:46:55.268592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.125 [2024-07-11 02:46:55.268621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.125 qpair failed and we were unable to recover it. 00:41:05.125 [2024-07-11 02:46:55.268710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.125 [2024-07-11 02:46:55.268737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.125 qpair failed and we were unable to recover it. 00:41:05.125 [2024-07-11 02:46:55.268838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.125 [2024-07-11 02:46:55.268869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.125 qpair failed and we were unable to recover it. 00:41:05.125 [2024-07-11 02:46:55.268968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.125 [2024-07-11 02:46:55.268996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.125 qpair failed and we were unable to recover it. 00:41:05.125 [2024-07-11 02:46:55.269084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.125 [2024-07-11 02:46:55.269112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.125 qpair failed and we were unable to recover it. 
00:41:05.125 [2024-07-11 02:46:55.269198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.125 [2024-07-11 02:46:55.269224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.125 qpair failed and we were unable to recover it. 00:41:05.125 [2024-07-11 02:46:55.269305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.125 [2024-07-11 02:46:55.269330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.125 qpair failed and we were unable to recover it. 00:41:05.125 [2024-07-11 02:46:55.269417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.125 [2024-07-11 02:46:55.269443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.125 qpair failed and we were unable to recover it. 00:41:05.125 [2024-07-11 02:46:55.269536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.269563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.269651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.269681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 
00:41:05.126 [2024-07-11 02:46:55.269772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.269798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.269890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.269916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.270002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.270027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.270124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.270149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.270232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.270258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 
00:41:05.126 [2024-07-11 02:46:55.270355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.270380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.270476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.270502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.270595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.270620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.270707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.270734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.270819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.270844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 
00:41:05.126 [2024-07-11 02:46:55.270933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.270960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.271045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.271070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.271160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.271186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.271273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.271299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.271388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.271413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 
00:41:05.126 [2024-07-11 02:46:55.271502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.271545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.271637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.271664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.271796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.271823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.271911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.271938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.272031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.272062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 
00:41:05.126 [2024-07-11 02:46:55.272150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.272180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.272280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.272308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.272400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.272427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.272521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.272548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.272650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.272675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 
00:41:05.126 [2024-07-11 02:46:55.272762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.272789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.272878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.272904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.272993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.273021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.273108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.273133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.273226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.273254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 
00:41:05.126 [2024-07-11 02:46:55.273354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.273381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.273469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.273495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.273590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.273616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.273718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.273746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.273838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.273866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 
00:41:05.126 [2024-07-11 02:46:55.273958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.273985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.274075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.274101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.274194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.274220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.274300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.274326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.274418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.274445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 
00:41:05.126 [2024-07-11 02:46:55.274546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.274580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.274685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.274714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.274813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.274843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.274938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.274967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.275062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.275089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 
00:41:05.126 [2024-07-11 02:46:55.275181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.275209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.275310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.275351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.126 [2024-07-11 02:46:55.275469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.126 [2024-07-11 02:46:55.275499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.126 qpair failed and we were unable to recover it. 00:41:05.127 [2024-07-11 02:46:55.275618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.127 [2024-07-11 02:46:55.275646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.127 qpair failed and we were unable to recover it. 00:41:05.127 [2024-07-11 02:46:55.275744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.127 [2024-07-11 02:46:55.275771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.127 qpair failed and we were unable to recover it. 
00:41:05.127 [2024-07-11 02:46:55.275861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.127 [2024-07-11 02:46:55.275887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.127 qpair failed and we were unable to recover it. 00:41:05.127 [2024-07-11 02:46:55.275994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.127 [2024-07-11 02:46:55.276038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.127 qpair failed and we were unable to recover it. 00:41:05.127 [2024-07-11 02:46:55.276132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.127 [2024-07-11 02:46:55.276158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.127 qpair failed and we were unable to recover it. 00:41:05.127 [2024-07-11 02:46:55.276244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.127 [2024-07-11 02:46:55.276271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.127 qpair failed and we were unable to recover it. 00:41:05.127 [2024-07-11 02:46:55.276361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.127 [2024-07-11 02:46:55.276390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.127 qpair failed and we were unable to recover it. 
00:41:05.127 [2024-07-11 02:46:55.276484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.127 [2024-07-11 02:46:55.276519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.127 qpair failed and we were unable to recover it. 00:41:05.127 [2024-07-11 02:46:55.276617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.127 [2024-07-11 02:46:55.276645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.127 qpair failed and we were unable to recover it. 00:41:05.127 [2024-07-11 02:46:55.276745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.127 [2024-07-11 02:46:55.276772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.127 qpair failed and we were unable to recover it. 00:41:05.127 [2024-07-11 02:46:55.276868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.127 [2024-07-11 02:46:55.276897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.127 qpair failed and we were unable to recover it. 00:41:05.127 [2024-07-11 02:46:55.276987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.127 [2024-07-11 02:46:55.277014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.127 qpair failed and we were unable to recover it. 
00:41:05.127 [2024-07-11 02:46:55.277110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.127 [2024-07-11 02:46:55.277137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.127 qpair failed and we were unable to recover it. 00:41:05.127 [2024-07-11 02:46:55.277225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.127 [2024-07-11 02:46:55.277252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.127 qpair failed and we were unable to recover it. 00:41:05.127 [2024-07-11 02:46:55.277337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.127 [2024-07-11 02:46:55.277363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.127 qpair failed and we were unable to recover it. 00:41:05.127 [2024-07-11 02:46:55.277449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.127 [2024-07-11 02:46:55.277476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.127 qpair failed and we were unable to recover it. 00:41:05.127 [2024-07-11 02:46:55.277582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.127 [2024-07-11 02:46:55.277611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.127 qpair failed and we were unable to recover it. 
00:41:05.127 [2024-07-11 02:46:55.277706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.127 [2024-07-11 02:46:55.277733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.127 qpair failed and we were unable to recover it. 00:41:05.127 [2024-07-11 02:46:55.277821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.127 [2024-07-11 02:46:55.277847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.127 qpair failed and we were unable to recover it. 00:41:05.127 [2024-07-11 02:46:55.277935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.127 [2024-07-11 02:46:55.277961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.127 qpair failed and we were unable to recover it. 00:41:05.127 [2024-07-11 02:46:55.278072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.127 [2024-07-11 02:46:55.278101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.127 qpair failed and we were unable to recover it. 00:41:05.127 [2024-07-11 02:46:55.278196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.127 [2024-07-11 02:46:55.278225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.127 qpair failed and we were unable to recover it. 
00:41:05.127 [2024-07-11 02:46:55.278318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.278344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.278429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.278456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.278562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.278589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.278687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.278716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.278809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.278836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.278918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.278944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.279029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.279055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.279149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.279177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.279269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.279296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.279386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.279414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.279500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.279544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.279631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.279658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.279745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.279772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.279857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.279884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.279968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.279994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.280088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.280115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.280201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.280232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.280329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.280357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.280441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.280467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.280562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.280589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.280678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.280705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.280787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.280814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.280905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.280932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.281022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.281050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.281140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.281168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.281260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.281290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.281493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.281528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.281631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.127 [2024-07-11 02:46:55.281658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.127 qpair failed and we were unable to recover it.
00:41:05.127 [2024-07-11 02:46:55.281748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.281775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.281867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.281894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.281992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.282020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.282110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.282136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.282225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.282254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.282352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.282381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.282585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.282614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.282709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.282735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.282829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.282856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.282943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.282969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.283058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.283084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.283172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.283200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.283289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.283317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.283410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.283438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.283532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.283559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.283647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.283678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.283776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.283802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.283897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.283924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.284009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.284036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.284127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.284153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.284240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.284266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.284350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.284376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.284464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.284492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.284592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.284619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.284704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.284730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.284814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.284841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.284927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.284956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.285046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.285072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.285169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.285197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.285284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.285310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.285410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.285440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.285531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.285559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.285647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.285676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.285766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.285793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.285881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.285907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.285996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.286024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.286112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.286139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.286230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.286257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.286356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.286385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.286472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.286499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.286598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.286626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.286720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.286747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.286958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.286986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.287074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.287101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.287195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.287223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.287309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.287336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.287434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.287461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.287553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.287579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.287678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.287703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.287794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.287820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.287911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.287936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.288022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.288048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.288145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.288171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.128 [2024-07-11 02:46:55.288281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.128 [2024-07-11 02:46:55.288307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.128 qpair failed and we were unable to recover it.
00:41:05.129 [2024-07-11 02:46:55.288397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.129 [2024-07-11 02:46:55.288423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.129 qpair failed and we were unable to recover it.
00:41:05.129 [2024-07-11 02:46:55.288563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.129 [2024-07-11 02:46:55.288593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.129 qpair failed and we were unable to recover it.
00:41:05.129 [2024-07-11 02:46:55.288694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.129 [2024-07-11 02:46:55.288720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.129 qpair failed and we were unable to recover it.
00:41:05.129 [2024-07-11 02:46:55.288813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.129 [2024-07-11 02:46:55.288840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.129 qpair failed and we were unable to recover it.
00:41:05.129 [2024-07-11 02:46:55.288934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.129 [2024-07-11 02:46:55.288960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.129 qpair failed and we were unable to recover it.
00:41:05.129 [2024-07-11 02:46:55.289050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.129 [2024-07-11 02:46:55.289076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.129 qpair failed and we were unable to recover it.
00:41:05.129 [2024-07-11 02:46:55.289164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.129 [2024-07-11 02:46:55.289190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.129 qpair failed and we were unable to recover it.
00:41:05.129 [2024-07-11 02:46:55.289278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.129 [2024-07-11 02:46:55.289303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.129 qpair failed and we were unable to recover it.
00:41:05.129 [2024-07-11 02:46:55.289402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.129 [2024-07-11 02:46:55.289431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.129 qpair failed and we were unable to recover it.
00:41:05.129 [2024-07-11 02:46:55.289531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.129 [2024-07-11 02:46:55.289569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.129 qpair failed and we were unable to recover it.
00:41:05.129 [2024-07-11 02:46:55.289669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.129 [2024-07-11 02:46:55.289696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.129 qpair failed and we were unable to recover it.
00:41:05.129 [2024-07-11 02:46:55.289787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.129 [2024-07-11 02:46:55.289816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.129 qpair failed and we were unable to recover it.
00:41:05.129 [2024-07-11 02:46:55.289939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.129 [2024-07-11 02:46:55.289970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.129 qpair failed and we were unable to recover it.
00:41:05.129 [2024-07-11 02:46:55.290064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.290091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.290182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.290208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.290307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.290334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.290425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.290452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.290546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.290573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 
00:41:05.129 [2024-07-11 02:46:55.290665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.290692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.290783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.290810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.290907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.290933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.291026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.291052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.291139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.291165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 
00:41:05.129 [2024-07-11 02:46:55.291249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.291275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.291362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.291387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.291477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.291505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.291617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.291643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.291734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.291761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 
00:41:05.129 [2024-07-11 02:46:55.291851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.291883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.291983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.292010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.292105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.292132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.292218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.292244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.292327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.292363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 
00:41:05.129 [2024-07-11 02:46:55.292457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.292484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.292588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.292616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.292707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.292733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.292826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.292853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.292946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.292973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 
00:41:05.129 [2024-07-11 02:46:55.293060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.293087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.293178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.293205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.293294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.293321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.293416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.293446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.293552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.293583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 
00:41:05.129 [2024-07-11 02:46:55.293679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.293706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.293800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.293829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.293924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.293952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.294054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.294081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.294172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.294201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 
00:41:05.129 [2024-07-11 02:46:55.294300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.294327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.294413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.294439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.294532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.294559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.294659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.294686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 00:41:05.129 [2024-07-11 02:46:55.294779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.129 [2024-07-11 02:46:55.294810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.129 qpair failed and we were unable to recover it. 
00:41:05.130 [2024-07-11 02:46:55.294908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.294935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.295016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.295043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.295135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.295162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.295250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.295276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.295371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.295398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 
00:41:05.130 [2024-07-11 02:46:55.295488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.295521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.295608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.295634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.295732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.295759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.295849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.295876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.295958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.295984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 
00:41:05.130 [2024-07-11 02:46:55.296080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.296110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.296200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.296226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.296318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.296345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.296441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.296467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.296561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.296589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 
00:41:05.130 [2024-07-11 02:46:55.296686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.296717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.296803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.296830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.296915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.296940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.297041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.297070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.297165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.297193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 
00:41:05.130 [2024-07-11 02:46:55.297282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.297309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.297401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.297427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.297521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.297549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.297747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.297774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.297872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.297898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 
00:41:05.130 [2024-07-11 02:46:55.297980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.298007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.298098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.298126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.298215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.298243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.298328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.298356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.298446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.298473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 
00:41:05.130 [2024-07-11 02:46:55.298566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.298595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.298683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.298710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.298817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.298845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.298941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.298970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.299069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.299097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 
00:41:05.130 [2024-07-11 02:46:55.299193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.299220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.299309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.299335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.299417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.299444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.299539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.299566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.299656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.299683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 
00:41:05.130 [2024-07-11 02:46:55.299774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.299801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.299890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.299915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.300005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.300040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.300128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.300154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.130 [2024-07-11 02:46:55.300249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.300276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 
00:41:05.130 [2024-07-11 02:46:55.300366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.130 [2024-07-11 02:46:55.300392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.130 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.300483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.300523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.300609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.300640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.300735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.300761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.300846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.300872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 
00:41:05.131 [2024-07-11 02:46:55.300959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.300985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.301070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.301094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.301176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.301201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.301289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.301322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.301420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.301446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 
00:41:05.131 [2024-07-11 02:46:55.301532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.301559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.301658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.301685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.301774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.301800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.301883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.301909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.302002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.302028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 
00:41:05.131 [2024-07-11 02:46:55.302109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.302135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.302225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.302250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.302331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.302355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.302438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.302464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.302564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.302591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 
00:41:05.131 [2024-07-11 02:46:55.302682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.302710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.302799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.302826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.302921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.302947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.303036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.303062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.303144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.303174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 
00:41:05.131 [2024-07-11 02:46:55.303260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.303285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.303382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.303408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.303495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.303536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.303623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.303649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.303740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.303766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 
00:41:05.131 [2024-07-11 02:46:55.303857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.303883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.303968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.303999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.304085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.304109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.304193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.304219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.304308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.304334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 
00:41:05.131 [2024-07-11 02:46:55.304428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.304454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.304548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.304575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.304661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.304687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.304780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.304807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.304904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.304933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 
00:41:05.131 [2024-07-11 02:46:55.305026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.305053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.305145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.305172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.305257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.305284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.305374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.305401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.305486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.305531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 
00:41:05.131 [2024-07-11 02:46:55.305625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.305653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.305740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.305767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.305853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.305881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.305972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.306001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.306101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.306132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 
00:41:05.131 [2024-07-11 02:46:55.306223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.306251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.306349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.306382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.306480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.306515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.131 [2024-07-11 02:46:55.306618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.131 [2024-07-11 02:46:55.306651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.131 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.306751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.306785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 
00:41:05.132 [2024-07-11 02:46:55.306878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.306906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.306999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.307027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.307117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.307143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.307232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.307258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.307354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.307380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 
00:41:05.132 [2024-07-11 02:46:55.307462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.307488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.307598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.307626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.307711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.307738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.307836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.307862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.307951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.307978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 
00:41:05.132 [2024-07-11 02:46:55.308066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.308093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.308180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.308207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.308293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.308321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.308413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.308441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.308534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.308564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 
00:41:05.132 [2024-07-11 02:46:55.308657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.308685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.308773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.308800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.308898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.308924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.309024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.309051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.309137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.309164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 
00:41:05.132 [2024-07-11 02:46:55.309249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.309275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.309358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.309384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.309469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.309494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.309605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.309636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.309716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.309741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 
00:41:05.132 [2024-07-11 02:46:55.309839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.309865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.309956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.309986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.310083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.310110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.310202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.310229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.310320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.310347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 
00:41:05.132 [2024-07-11 02:46:55.310439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.310466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.310563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.310590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.310676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.310704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.310786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.310813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.310901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.310927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 
00:41:05.132 [2024-07-11 02:46:55.311010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.311036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.311127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.311156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.311261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.311288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.311379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.311404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 00:41:05.132 [2024-07-11 02:46:55.311492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.311524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 
00:41:05.132 [2024-07-11 02:46:55.311618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.132 [2024-07-11 02:46:55.311644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.132 qpair failed and we were unable to recover it. 
00:41:05.132-00:41:05.135 [2024-07-11 02:46:55.311731 through 02:46:55.325186] (same posix.c:1038:posix_sock_create "connect() failed, errno = 111" / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock "sock connection error ... with addr=10.0.0.2, port=4420" pair repeated for tqpair values 0x2266180, 0x7f332c000b90, and 0x7f333c000b90, each followed by "qpair failed and we were unable to recover it.") 
00:41:05.135 [2024-07-11 02:46:55.325281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.325309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.325390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.325416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.325503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.325536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.325631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.325658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.325743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.325770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 
00:41:05.135 [2024-07-11 02:46:55.325857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.325884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.325967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.325993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.326084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.326110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.326193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.326219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.326299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.326326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 
00:41:05.135 [2024-07-11 02:46:55.326413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.326439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.326538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.326566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.326668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.326696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.326784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.326810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.326892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.326919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 
00:41:05.135 [2024-07-11 02:46:55.327010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.327036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.327127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.327154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.327240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.327265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.327363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.327391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.327484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.327522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 
00:41:05.135 [2024-07-11 02:46:55.327622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.327648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.327730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.327757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.327854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.327881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.327970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.327997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.328089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.328117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 
00:41:05.135 [2024-07-11 02:46:55.328199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.328225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.328307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.328334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.328427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.328454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.328549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.328577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.328661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.328692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 
00:41:05.135 [2024-07-11 02:46:55.328781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.328808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.328895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.328922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.329016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.329042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.329123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.329148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.329249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.329278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 
00:41:05.135 [2024-07-11 02:46:55.329360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.329387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.329469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.329495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.329592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.329619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.329717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.329744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.329832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.329860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 
00:41:05.135 [2024-07-11 02:46:55.329945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.329974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.330064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.330091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.330183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.330212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.330308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.330336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.330422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.330449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 
00:41:05.135 [2024-07-11 02:46:55.330542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.330577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.330660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.330686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.330780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.330806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.330902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.135 [2024-07-11 02:46:55.330929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.135 qpair failed and we were unable to recover it. 00:41:05.135 [2024-07-11 02:46:55.331014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.331040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 
00:41:05.136 [2024-07-11 02:46:55.331121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.331147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.331232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.331260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.331343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.331370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.331458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.331485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.331595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.331623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 
00:41:05.136 [2024-07-11 02:46:55.331710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.331736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.331826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.331856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.331948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.331974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.332062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.332091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.332183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.332210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 
00:41:05.136 [2024-07-11 02:46:55.332298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.332325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.332406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.332433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.332523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.332551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.332645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.332674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.332769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.332796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 
00:41:05.136 [2024-07-11 02:46:55.332875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.332902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.332990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.333018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.333108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.333136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.333227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.333253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.333344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.333370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 
00:41:05.136 [2024-07-11 02:46:55.333461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.333487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.333631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.333659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.333746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.333774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.333858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.333884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.333965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.333992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 
00:41:05.136 [2024-07-11 02:46:55.334088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.334114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.334204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.334230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.334310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.334336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.334427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.334456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.334547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.334579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 
00:41:05.136 [2024-07-11 02:46:55.334678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.334704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.334795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.334822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.334901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.334927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.335013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.335046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.335139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.335166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 
00:41:05.136 [2024-07-11 02:46:55.335259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.335288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.335375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.335402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.335495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.335536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.335637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.335665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.335761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.335789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 
00:41:05.136 [2024-07-11 02:46:55.335878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.335906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.335996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.336023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.336115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.336141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.336273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.336299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.336390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.336416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 
00:41:05.136 [2024-07-11 02:46:55.336504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.336541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.136 [2024-07-11 02:46:55.336638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.136 [2024-07-11 02:46:55.336665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.136 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.336756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.336782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.336872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.336898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.337029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.337055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 
00:41:05.137 [2024-07-11 02:46:55.337137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.337163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.337248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.337274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.337368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.337397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.337486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.337522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.337667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.337695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 
00:41:05.137 [2024-07-11 02:46:55.337777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.337804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.337900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.337929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.338014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.338041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.338126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.338153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.338242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.338268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 
00:41:05.137 [2024-07-11 02:46:55.338349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.338380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.338464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.338490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.338589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.338615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.338702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.338728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.338817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.338843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 
00:41:05.137 [2024-07-11 02:46:55.338925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.338952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.339032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.339058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.339144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.339172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.339262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.339288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.339379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.339406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 
00:41:05.137 [2024-07-11 02:46:55.339492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.339526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.339610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.339636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.339721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.339747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.339839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.339866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.339967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.339995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 
00:41:05.137 [2024-07-11 02:46:55.340086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.340112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.340193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.340221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.340311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.340340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.340439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.340467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.340564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.340592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 
00:41:05.137 [2024-07-11 02:46:55.340683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.340711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.340800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.340827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.340920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.340948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.341042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.341070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.341154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.341180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 
00:41:05.137 [2024-07-11 02:46:55.341267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.341295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.341385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.341412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.341498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.341531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.341629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.341657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.341744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.341771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 
00:41:05.137 [2024-07-11 02:46:55.341858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.341885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.341972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.342000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.342085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.342111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.342206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.342233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.342321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.342348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 
00:41:05.137 [2024-07-11 02:46:55.342444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.342471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.342560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.342586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.342679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.342704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.342795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.342822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 00:41:05.137 [2024-07-11 02:46:55.342905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.137 [2024-07-11 02:46:55.342931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.137 qpair failed and we were unable to recover it. 
00:41:05.138 [2024-07-11 02:46:55.343025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.343052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 00:41:05.138 [2024-07-11 02:46:55.343142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.343169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 00:41:05.138 [2024-07-11 02:46:55.343258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.343284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 00:41:05.138 [2024-07-11 02:46:55.343374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.343403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 00:41:05.138 [2024-07-11 02:46:55.343499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.343534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 
00:41:05.138 [2024-07-11 02:46:55.343617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.343644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 00:41:05.138 [2024-07-11 02:46:55.343728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.343754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 00:41:05.138 [2024-07-11 02:46:55.343847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.343873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 00:41:05.138 [2024-07-11 02:46:55.343961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.343988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 00:41:05.138 [2024-07-11 02:46:55.344077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.344104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 
00:41:05.138 [2024-07-11 02:46:55.344188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.344214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 00:41:05.138 [2024-07-11 02:46:55.344301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.344327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 00:41:05.138 [2024-07-11 02:46:55.344421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.344448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 00:41:05.138 [2024-07-11 02:46:55.344568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.344594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 00:41:05.138 [2024-07-11 02:46:55.344688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.344714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 
00:41:05.138 [2024-07-11 02:46:55.344794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.344821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 00:41:05.138 [2024-07-11 02:46:55.344909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.344935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 00:41:05.138 [2024-07-11 02:46:55.345015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.345041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 00:41:05.138 [2024-07-11 02:46:55.345120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.345146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 00:41:05.138 [2024-07-11 02:46:55.345227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.345252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 
00:41:05.138 [2024-07-11 02:46:55.345338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.345365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 00:41:05.138 [2024-07-11 02:46:55.345448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.345475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 00:41:05.138 [2024-07-11 02:46:55.345583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.345611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 00:41:05.138 [2024-07-11 02:46:55.345698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.345724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 00:41:05.138 [2024-07-11 02:46:55.345806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.345832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 
00:41:05.138 [2024-07-11 02:46:55.345912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.345938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 00:41:05.138 [2024-07-11 02:46:55.346018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.346045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 00:41:05.138 [2024-07-11 02:46:55.346125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.346151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 00:41:05.138 [2024-07-11 02:46:55.346247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.346273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 00:41:05.138 [2024-07-11 02:46:55.346363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.138 [2024-07-11 02:46:55.346389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.138 qpair failed and we were unable to recover it. 
00:41:05.138 [2024-07-11 02:46:55.346478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.138 [2024-07-11 02:46:55.346507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.138 qpair failed and we were unable to recover it.
00:41:05.138 [2024-07-11 02:46:55.346618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.138 [2024-07-11 02:46:55.346644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.138 qpair failed and we were unable to recover it.
00:41:05.138 [2024-07-11 02:46:55.346724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.138 [2024-07-11 02:46:55.346751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.138 qpair failed and we were unable to recover it.
00:41:05.138 [2024-07-11 02:46:55.346839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.138 [2024-07-11 02:46:55.346867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.138 qpair failed and we were unable to recover it.
00:41:05.138 [2024-07-11 02:46:55.346954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.138 [2024-07-11 02:46:55.346983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.138 qpair failed and we were unable to recover it.
00:41:05.138 [2024-07-11 02:46:55.347072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.138 [2024-07-11 02:46:55.347099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.138 qpair failed and we were unable to recover it.
00:41:05.138 [2024-07-11 02:46:55.347187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.138 [2024-07-11 02:46:55.347213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.138 qpair failed and we were unable to recover it.
00:41:05.138 [2024-07-11 02:46:55.347346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.138 [2024-07-11 02:46:55.347374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.138 qpair failed and we were unable to recover it.
00:41:05.138 [2024-07-11 02:46:55.347466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.138 [2024-07-11 02:46:55.347493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.138 qpair failed and we were unable to recover it.
00:41:05.138 [2024-07-11 02:46:55.347597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.138 [2024-07-11 02:46:55.347625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.138 qpair failed and we were unable to recover it.
00:41:05.138 [2024-07-11 02:46:55.347707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.138 [2024-07-11 02:46:55.347734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.138 qpair failed and we were unable to recover it.
00:41:05.138 [2024-07-11 02:46:55.347826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.138 [2024-07-11 02:46:55.347853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.138 qpair failed and we were unable to recover it.
00:41:05.138 [2024-07-11 02:46:55.347933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.138 [2024-07-11 02:46:55.347960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.138 qpair failed and we were unable to recover it.
00:41:05.138 [2024-07-11 02:46:55.348052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.138 [2024-07-11 02:46:55.348081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.138 qpair failed and we were unable to recover it.
00:41:05.138 [2024-07-11 02:46:55.348164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.138 [2024-07-11 02:46:55.348191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.138 qpair failed and we were unable to recover it.
00:41:05.138 [2024-07-11 02:46:55.348290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.138 [2024-07-11 02:46:55.348318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.138 qpair failed and we were unable to recover it.
00:41:05.138 [2024-07-11 02:46:55.348451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.138 [2024-07-11 02:46:55.348478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.138 qpair failed and we were unable to recover it.
00:41:05.138 [2024-07-11 02:46:55.348588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.138 [2024-07-11 02:46:55.348617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.138 qpair failed and we were unable to recover it.
00:41:05.138 [2024-07-11 02:46:55.348707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.138 [2024-07-11 02:46:55.348735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.138 qpair failed and we were unable to recover it.
00:41:05.138 [2024-07-11 02:46:55.348824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.138 [2024-07-11 02:46:55.348850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.138 qpair failed and we were unable to recover it.
00:41:05.138 [2024-07-11 02:46:55.348941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.138 [2024-07-11 02:46:55.348968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.138 qpair failed and we were unable to recover it.
00:41:05.138 [2024-07-11 02:46:55.349052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.138 [2024-07-11 02:46:55.349078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.138 qpair failed and we were unable to recover it.
00:41:05.138 [2024-07-11 02:46:55.349166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.138 [2024-07-11 02:46:55.349194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.138 qpair failed and we were unable to recover it.
00:41:05.138 [2024-07-11 02:46:55.349280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.138 [2024-07-11 02:46:55.349306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.349392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.349423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.349504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.349535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.349624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.349650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.349747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.349776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.349858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.349885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.349977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.350005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.350095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.350121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.350257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.350286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.350380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.350408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.350497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.350531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.350628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.350654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.350734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.350760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.350841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.350867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.350955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.350983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.351080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.351107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.351195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.351222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.351308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.351335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.351423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.351450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.351567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.351606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.351700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.351726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.351813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.351839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.351927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.351953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.352041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.352068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.352151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.352177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.352263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.352292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.352379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.352406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.352491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.352527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.352667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.352700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.352839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.352866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.352948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.352974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.353062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.353089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.353184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.353212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.353306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.353333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.353414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.353440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.353523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.353551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.353652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.353679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.353772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.353799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.353878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.353905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.353990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.354016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.354108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.354134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.354219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.354245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.354345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.354372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.354459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.354486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.354576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.354605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.354698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.354726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.354811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.354837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.354924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.354952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.355040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.355066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.355151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.355177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.355264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.355290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.355371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.355397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.139 [2024-07-11 02:46:55.355483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.139 [2024-07-11 02:46:55.355515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.139 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.355611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.355639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.355728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.355755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.355890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.355920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.356004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.356031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.356118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.356146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.356229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.356256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.356339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.356366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.356448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.356475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.356572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.356600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.356693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.356721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.356807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.356834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.356918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.356944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.357042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.357068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.357201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.357228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.357323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.357351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.357442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.357476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.357580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.357608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.357698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.357724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.357816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.357846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.357938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.357964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.358054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.358081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.358163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.358190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.358277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.358305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.358387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.358416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.358519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.358548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.358680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.358707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.358797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.358823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.358903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.358930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.359019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.359046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.359147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.359175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.359267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.359296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.359379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.359406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.359501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.359537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.359632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.359659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.359747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.359776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.359859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.359885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.359970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.359997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.360083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.360112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.360206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.360234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.360329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.360356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.360446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.360473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.360564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.360591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.360685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.360719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.360806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.360834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.360917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.360944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.361035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.361062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.361154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.140 [2024-07-11 02:46:55.361182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.140 qpair failed and we were unable to recover it.
00:41:05.140 [2024-07-11 02:46:55.361272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.361300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.361386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.361413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.361495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.361526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.361615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.361644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.361736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.361763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.361859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.361886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.361975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.362002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.362090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.362117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.362200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.362227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.362321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.362350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.362440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.362467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.362573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.362599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.362683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.362710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.362796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.362824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.362925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.362952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.363043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.363071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.363166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.363195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.363286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.363313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.363405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.363432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.363524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.363552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.363638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.363665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.363757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.363784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.363871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.363899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.363989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.364015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.364101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.364128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.364216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.364242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.364333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.364361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.364453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.364480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.364571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.364598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.364692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.364720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.364801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.364827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.364913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.364941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.365025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.365051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.365133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.365159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.365249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.365276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.365363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.365394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.365480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.365506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.365609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.365637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.365731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.365757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.365848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.365875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.365968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.365995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.366089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.366116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.366208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.366238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.366339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.366368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.366452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.366478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.366576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.366603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.366690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.366716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.366797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.366823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.366909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.366935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.367029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.367058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.367158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.367185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 [2024-07-11 02:46:55.367272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.367299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.141 qpair failed and we were unable to recover it.
00:41:05.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1980927 Killed "${NVMF_APP[@]}" "$@"
00:41:05.141 [2024-07-11 02:46:55.367388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.141 [2024-07-11 02:46:55.367423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.142 qpair failed and we were unable to recover it.
00:41:05.142 [2024-07-11 02:46:55.367508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.142 [2024-07-11 02:46:55.367542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.142 qpair failed and we were unable to recover it.
00:41:05.142 [2024-07-11 02:46:55.367634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.142 [2024-07-11 02:46:55.367661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.142 qpair failed and we were unable to recover it.
00:41:05.142 [2024-07-11 02:46:55.367751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.142 [2024-07-11 02:46:55.367778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.142 qpair failed and we were unable to recover it.
00:41:05.142 [2024-07-11 02:46:55.367866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.142 [2024-07-11 02:46:55.367894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.142 qpair failed and we were unable to recover it.
00:41:05.142 [2024-07-11 02:46:55.367981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.142 [2024-07-11 02:46:55.368010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.142 qpair failed and we were unable to recover it.
00:41:05.142 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:41:05.142 [2024-07-11 02:46:55.368094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.142 [2024-07-11 02:46:55.368122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.142 qpair failed and we were unable to recover it.
00:41:05.142 [2024-07-11 02:46:55.368203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.142 [2024-07-11 02:46:55.368230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.142 qpair failed and we were unable to recover it.
00:41:05.142 [2024-07-11 02:46:55.368320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.142 [2024-07-11 02:46:55.368347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.142 qpair failed and we were unable to recover it.
00:41:05.142 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:41:05.142 [2024-07-11 02:46:55.368437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.142 [2024-07-11 02:46:55.368474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.142 qpair failed and we were unable to recover it.
00:41:05.142 [2024-07-11 02:46:55.368577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.142 [2024-07-11 02:46:55.368604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.142 qpair failed and we were unable to recover it.
00:41:05.142 [2024-07-11 02:46:55.368699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.142 [2024-07-11 02:46:55.368726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.142 qpair failed and we were unable to recover it.
00:41:05.142 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:41:05.142 [2024-07-11 02:46:55.368808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.142 [2024-07-11 02:46:55.368835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.142 qpair failed and we were unable to recover it.
00:41:05.142 [2024-07-11 02:46:55.368934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.142 [2024-07-11 02:46:55.368960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.142 qpair failed and we were unable to recover it.
00:41:05.142 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:41:05.142 [2024-07-11 02:46:55.369047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.369076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.369165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.369192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.369274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.369302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:05.142 [2024-07-11 02:46:55.369399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.369429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.369524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.369565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 
00:41:05.142 [2024-07-11 02:46:55.369662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.369689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.369785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.369813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.369900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.369931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.370065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.370094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.370183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.370210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 
00:41:05.142 [2024-07-11 02:46:55.370292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.370320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.370412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.370440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.370530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.370558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.370643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.370671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.370758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.370785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 
00:41:05.142 [2024-07-11 02:46:55.370871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.370898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.370984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.371011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.371103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.371131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.371219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.371245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.371338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.371367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 
00:41:05.142 [2024-07-11 02:46:55.371452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.371479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.371576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.371605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.371699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.371730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.371823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.371850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.371938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.371967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 
00:41:05.142 [2024-07-11 02:46:55.372051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.372077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.372166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.372194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.372280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.372308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.372399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.372425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.372519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.372546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 
00:41:05.142 [2024-07-11 02:46:55.372639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.372667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.372761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.372789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.372872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.372899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.372989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.373016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.373106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.373134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 
00:41:05.142 [2024-07-11 02:46:55.373229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.373257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.373353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.142 [2024-07-11 02:46:55.373379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.142 qpair failed and we were unable to recover it. 00:41:05.142 [2024-07-11 02:46:55.373460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.373486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 [2024-07-11 02:46:55.373579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.373605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 [2024-07-11 02:46:55.373686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.373712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 
00:41:05.143 [2024-07-11 02:46:55.373797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.373826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1981357 00:41:05.143 [2024-07-11 02:46:55.373930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.373958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.143 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1981357 00:41:05.143 [2024-07-11 02:46:55.374049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.374080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 [2024-07-11 02:46:55.374215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.374243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 
00:41:05.143 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1981357 ']' 00:41:05.143 [2024-07-11 02:46:55.374327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.374354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:05.143 [2024-07-11 02:46:55.374443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.374471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 [2024-07-11 02:46:55.374584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:05.143 [2024-07-11 02:46:55.374612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 [2024-07-11 02:46:55.374703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.374730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 
00:41:05.143 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:05.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:05.143 [2024-07-11 02:46:55.374816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.374843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:05.143 [2024-07-11 02:46:55.374931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.374958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:05.143 [2024-07-11 02:46:55.375056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.375086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 [2024-07-11 02:46:55.375171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.375201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 
00:41:05.143 [2024-07-11 02:46:55.375298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.375326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 [2024-07-11 02:46:55.375415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.375442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 [2024-07-11 02:46:55.375530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.375558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 [2024-07-11 02:46:55.375650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.375678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 [2024-07-11 02:46:55.375767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.375802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 
00:41:05.143 [2024-07-11 02:46:55.375888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.375915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 [2024-07-11 02:46:55.376004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.376031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 [2024-07-11 02:46:55.376136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.376165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 [2024-07-11 02:46:55.376259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.376287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 [2024-07-11 02:46:55.376370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.376397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 
00:41:05.143 [2024-07-11 02:46:55.376479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.376504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 [2024-07-11 02:46:55.376594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.376620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 [2024-07-11 02:46:55.376719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.376746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 [2024-07-11 02:46:55.376838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.376864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 [2024-07-11 02:46:55.376949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.376976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 
00:41:05.143 [2024-07-11 02:46:55.377068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.377097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 [2024-07-11 02:46:55.377199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.377235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 [2024-07-11 02:46:55.377333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.377363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 [2024-07-11 02:46:55.377463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.377490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 [2024-07-11 02:46:55.377597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.377627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 
00:41:05.143 [2024-07-11 02:46:55.377721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.377750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 [2024-07-11 02:46:55.377846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.377877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 [2024-07-11 02:46:55.377968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.377996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 [2024-07-11 02:46:55.378088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.378117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.143 [2024-07-11 02:46:55.378205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.378233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 
00:41:05.143 [2024-07-11 02:46:55.378319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.143 [2024-07-11 02:46:55.378347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.143 qpair failed and we were unable to recover it. 00:41:05.144 [2024-07-11 02:46:55.378437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.144 [2024-07-11 02:46:55.378466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.144 qpair failed and we were unable to recover it. 00:41:05.144 [2024-07-11 02:46:55.378569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.144 [2024-07-11 02:46:55.378597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.144 qpair failed and we were unable to recover it. 00:41:05.144 [2024-07-11 02:46:55.378691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.144 [2024-07-11 02:46:55.378719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.144 qpair failed and we were unable to recover it. 00:41:05.144 [2024-07-11 02:46:55.378814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.144 [2024-07-11 02:46:55.378842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.144 qpair failed and we were unable to recover it. 
00:41:05.144 [2024-07-11 02:46:55.378929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.144 [2024-07-11 02:46:55.378956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.144 qpair failed and we were unable to recover it. 00:41:05.144 [2024-07-11 02:46:55.379048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.144 [2024-07-11 02:46:55.379080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.144 qpair failed and we were unable to recover it. 00:41:05.144 [2024-07-11 02:46:55.379184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.144 [2024-07-11 02:46:55.379215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.144 qpair failed and we were unable to recover it. 00:41:05.144 [2024-07-11 02:46:55.379312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.144 [2024-07-11 02:46:55.379342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.144 qpair failed and we were unable to recover it. 00:41:05.144 [2024-07-11 02:46:55.379436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.144 [2024-07-11 02:46:55.379464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.144 qpair failed and we were unable to recover it. 
00:41:05.146 [2024-07-11 02:46:55.392892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.392920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.393009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.393038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.393135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.393164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.393254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.393281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.393371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.393399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 
00:41:05.146 [2024-07-11 02:46:55.393488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.393522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.393608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.393635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.393727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.393755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.393840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.393867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.393960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.393989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 
00:41:05.146 [2024-07-11 02:46:55.394075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.394103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.394198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.394226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.394315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.394345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.394453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.394481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.394586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.394614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 
00:41:05.146 [2024-07-11 02:46:55.394698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.394725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.394815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.394843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.394942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.394974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.395068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.395096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.395187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.395215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 
00:41:05.146 [2024-07-11 02:46:55.395312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.395345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.395434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.395466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.395568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.395597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.395686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.395714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.395798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.395824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 
00:41:05.146 [2024-07-11 02:46:55.395919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.395947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.396036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.396064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.396154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.396183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.396272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.396300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.396394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.396424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 
00:41:05.146 [2024-07-11 02:46:55.396524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.396556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.396647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.396675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.396771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.396799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.396890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.396919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.397019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.397051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 
00:41:05.146 [2024-07-11 02:46:55.397144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.397173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.397268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.397296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.397380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.397407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.397491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.397529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.397623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.397651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 
00:41:05.146 [2024-07-11 02:46:55.397745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.397773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.397862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.397889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.397977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.398004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.398103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.398137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.398233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.398263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 
00:41:05.146 [2024-07-11 02:46:55.398357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.398388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.146 qpair failed and we were unable to recover it. 00:41:05.146 [2024-07-11 02:46:55.398484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.146 [2024-07-11 02:46:55.398609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.398703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.398736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.398824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.398852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.398943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.398972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 
00:41:05.147 [2024-07-11 02:46:55.399066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.399094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.399186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.399216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.399308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.399338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.399427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.399456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.399550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.399579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 
00:41:05.147 [2024-07-11 02:46:55.399673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.399700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.399788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.399816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.399903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.399931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.400025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.400052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.400142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.400169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 
00:41:05.147 [2024-07-11 02:46:55.400254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.400282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.400378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.400407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.400494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.400538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.400640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.400669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.400755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.400783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 
00:41:05.147 [2024-07-11 02:46:55.400877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.400906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.400994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.401021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.401117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.401146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.401230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.401258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.401353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.401388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 
00:41:05.147 [2024-07-11 02:46:55.401486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.401519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.401612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.401641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.401738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.401766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.401859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.401888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.401981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.402013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 
00:41:05.147 [2024-07-11 02:46:55.402106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.402137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.402234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.402262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.402351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.402378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.402473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.402500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.402608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.402636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 
00:41:05.147 [2024-07-11 02:46:55.402727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.402755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.402853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.402881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.402967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.402995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.403080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.403108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.403204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.403237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 
00:41:05.147 [2024-07-11 02:46:55.403326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.403355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.403446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.403474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.403575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.403604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.403705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.403734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.403827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.403857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 
00:41:05.147 [2024-07-11 02:46:55.403950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.403979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.404072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.404100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.404192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.404220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.404318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.404347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.404438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.404470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 
00:41:05.147 [2024-07-11 02:46:55.404574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.404603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.404696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.404724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.404818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.404846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.404936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.404965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.405049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.405076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 
00:41:05.147 [2024-07-11 02:46:55.405172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.405200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.405295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.405323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.405410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.405437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.405536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.405565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.405652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.405680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 
00:41:05.147 [2024-07-11 02:46:55.405762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.405789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.405874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.405901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.405985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.147 [2024-07-11 02:46:55.406014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.147 qpair failed and we were unable to recover it. 00:41:05.147 [2024-07-11 02:46:55.406104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.406133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.406226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.406257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 
00:41:05.148 [2024-07-11 02:46:55.406348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.406376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.406463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.406491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.406598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.406628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.406713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.406742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.406828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.406861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 
00:41:05.148 [2024-07-11 02:46:55.406960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.406988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.407089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.407120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.407218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.407246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.407342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.407371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.407468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.407497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 
00:41:05.148 [2024-07-11 02:46:55.407597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.407624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.407718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.407747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.407840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.407871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.407973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.408005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.408104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.408133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 
00:41:05.148 [2024-07-11 02:46:55.408224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.408252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.408348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.408377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.408470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.408498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.408597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.408625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.408717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.408744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 
00:41:05.148 [2024-07-11 02:46:55.408831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.408858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.408950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.408979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.409071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.409098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.409187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.409213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.409300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.409327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 
00:41:05.148 [2024-07-11 02:46:55.409414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.409441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.409529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.409557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.409648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.409675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.409766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.409794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.409882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.409910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 
00:41:05.148 [2024-07-11 02:46:55.409996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.410023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.410125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.410160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.410259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.410287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.410382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.410411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.410502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.410536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 
00:41:05.148 [2024-07-11 02:46:55.410631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.410658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.410747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.410774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.410864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.410890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.410976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.411002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.411090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.411117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 
00:41:05.148 [2024-07-11 02:46:55.411211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.411240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.411329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.411356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.411447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.411476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.411572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.411602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.411703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.411733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 
00:41:05.148 [2024-07-11 02:46:55.411827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.411853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.411945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.411972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.412058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.412085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.412172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.412199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.412288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.412316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 
00:41:05.148 [2024-07-11 02:46:55.412403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.412432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.412525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.412552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.412642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.412671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.412762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.412789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.412881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.412909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 
00:41:05.148 [2024-07-11 02:46:55.412990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.413017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.413102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.413131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.413217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.413246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.148 [2024-07-11 02:46:55.413338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.148 [2024-07-11 02:46:55.413365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.148 qpair failed and we were unable to recover it. 00:41:05.149 [2024-07-11 02:46:55.413456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.149 [2024-07-11 02:46:55.413484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.149 qpair failed and we were unable to recover it. 
00:41:05.149 [2024-07-11 02:46:55.413580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.149 [2024-07-11 02:46:55.413607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.149 qpair failed and we were unable to recover it. 00:41:05.149 [2024-07-11 02:46:55.413691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.149 [2024-07-11 02:46:55.413718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.149 qpair failed and we were unable to recover it. 00:41:05.149 [2024-07-11 02:46:55.413807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.149 [2024-07-11 02:46:55.413834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.149 qpair failed and we were unable to recover it. 00:41:05.149 [2024-07-11 02:46:55.413929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.149 [2024-07-11 02:46:55.413955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.149 qpair failed and we were unable to recover it. 00:41:05.149 [2024-07-11 02:46:55.414042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.149 [2024-07-11 02:46:55.414068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.149 qpair failed and we were unable to recover it. 
00:41:05.149 [2024-07-11 02:46:55.414158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.149 [2024-07-11 02:46:55.414185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.149 qpair failed and we were unable to recover it. 00:41:05.149 [2024-07-11 02:46:55.414270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.149 [2024-07-11 02:46:55.414296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.149 qpair failed and we were unable to recover it. 00:41:05.149 [2024-07-11 02:46:55.414382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.149 [2024-07-11 02:46:55.414408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.149 qpair failed and we were unable to recover it. 00:41:05.149 [2024-07-11 02:46:55.414497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.149 [2024-07-11 02:46:55.414534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.149 qpair failed and we were unable to recover it. 00:41:05.149 [2024-07-11 02:46:55.414634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.149 [2024-07-11 02:46:55.414661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.149 qpair failed and we were unable to recover it. 
00:41:05.149 [2024-07-11 02:46:55.414750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.414776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.414860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.414886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.414980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.415007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.415091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.415118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.415204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.415230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.415317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.415344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.415437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.415467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.415562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.415590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.415678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.415705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.415796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.415824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.415912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.415941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.416028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.416054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.416150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.416176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.416263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.416289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.416376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.416402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.416489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.416525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.416619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.416648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.416737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.416763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.416853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.416880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.416963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.416990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.417075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.417102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.417185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.417214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.417304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.417330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.417423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.417450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.417539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.417566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.417656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.417682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.417769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.417795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.417875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.417901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.417987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.418017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.418104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.418130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.418217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.418244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.418327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.418353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.418437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.418463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.418550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.418577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.418671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.418697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.418784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.418810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.418892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.418918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.419010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.419037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.419129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.419160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.419254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.419283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.419372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.419399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.419484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.419515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.419610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.149 [2024-07-11 02:46:55.419637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.149 qpair failed and we were unable to recover it.
00:41:05.149 [2024-07-11 02:46:55.419725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.419752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.419844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.419872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.419963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.419991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.420087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.420116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.420205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.420232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.420323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.420349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.420440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.420467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.420568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.420596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.420687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.420714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.420800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.420827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.420914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.420943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.421030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.421058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.421143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.421173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.421262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.421289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.421376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.421402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.421490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.421532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.421614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.421641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.421726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.421752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.421843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.421872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.421963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.421991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.422080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.422107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.422193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.422219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.422319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.422345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.422435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.422463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.422562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.422589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.422681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.422708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.422799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.422826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.422908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.422934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.423027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.423057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.423150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.423179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.423264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.423293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.423384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.423411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.423500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.423537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.423628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.423656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.423738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.423765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.423865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.423892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.423978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.424005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.424093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.424121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.424235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.424264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.424354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.424393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.424487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.424526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.424631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.424658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.424653] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization...
00:41:05.150 [2024-07-11 02:46:55.424756] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:41:05.150 [2024-07-11 02:46:55.424767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.424794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.424909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.424936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.425030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.425055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.425141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.425170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.425268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.425296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.425380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.425408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.425505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.425541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.425634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.425664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.425760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.150 [2024-07-11 02:46:55.425787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.150 qpair failed and we were unable to recover it.
00:41:05.150 [2024-07-11 02:46:55.425876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.150 [2024-07-11 02:46:55.425908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.150 qpair failed and we were unable to recover it. 00:41:05.150 [2024-07-11 02:46:55.426001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.150 [2024-07-11 02:46:55.426031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.150 qpair failed and we were unable to recover it. 00:41:05.150 [2024-07-11 02:46:55.426124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.150 [2024-07-11 02:46:55.426152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.150 qpair failed and we were unable to recover it. 00:41:05.150 [2024-07-11 02:46:55.426245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.150 [2024-07-11 02:46:55.426274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.150 qpair failed and we were unable to recover it. 00:41:05.150 [2024-07-11 02:46:55.426368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.150 [2024-07-11 02:46:55.426395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.150 qpair failed and we were unable to recover it. 
00:41:05.150 [2024-07-11 02:46:55.426482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.150 [2024-07-11 02:46:55.426517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.150 qpair failed and we were unable to recover it. 00:41:05.150 [2024-07-11 02:46:55.426608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.150 [2024-07-11 02:46:55.426635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.150 qpair failed and we were unable to recover it. 00:41:05.150 [2024-07-11 02:46:55.426722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.150 [2024-07-11 02:46:55.426749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.150 qpair failed and we were unable to recover it. 00:41:05.150 [2024-07-11 02:46:55.426843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.150 [2024-07-11 02:46:55.426870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.150 qpair failed and we were unable to recover it. 00:41:05.150 [2024-07-11 02:46:55.426955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.150 [2024-07-11 02:46:55.426982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 
00:41:05.151 [2024-07-11 02:46:55.427074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.427102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.427193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.427220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.427309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.427338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.427432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.427459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.427559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.427588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 
00:41:05.151 [2024-07-11 02:46:55.427683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.427711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.427800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.427826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.427916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.427943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.428036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.428065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.428158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.428185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 
00:41:05.151 [2024-07-11 02:46:55.428276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.428303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.428392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.428419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.428506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.428541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.428637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.428665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.428758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.428786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 
00:41:05.151 [2024-07-11 02:46:55.428876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.428910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.429002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.429029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.429113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.429145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.429235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.429262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.429356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.429384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 
00:41:05.151 [2024-07-11 02:46:55.429466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.429493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.429592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.429620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.429712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.429739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.429833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.429861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.429949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.429975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 
00:41:05.151 [2024-07-11 02:46:55.430069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.430097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.430187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.430213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.430301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.430328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.430414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.430440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.430535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.430563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 
00:41:05.151 [2024-07-11 02:46:55.430650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.430677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.430768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.430795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.430887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.430914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.431000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.431029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.431119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.431146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 
00:41:05.151 [2024-07-11 02:46:55.431234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.431261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.431344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.431371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.431466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.431496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.431597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.431627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.431725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.431753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 
00:41:05.151 [2024-07-11 02:46:55.431839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.431865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.431951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.431978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.432068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.432096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.432189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.432216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.432308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.432335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 
00:41:05.151 [2024-07-11 02:46:55.432423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.432449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.432546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.432575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.432668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.432695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.432808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.432837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.432925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.432952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 
00:41:05.151 [2024-07-11 02:46:55.433041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.433068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.433163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.433192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.433279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.433307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.433403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.433430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.433529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.433556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 
00:41:05.151 [2024-07-11 02:46:55.433649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.433678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.433775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.433805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.433899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.433932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.434027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.434054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 00:41:05.151 [2024-07-11 02:46:55.434150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.151 [2024-07-11 02:46:55.434177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.151 qpair failed and we were unable to recover it. 
00:41:05.151 [2024-07-11 02:46:55.434262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.152 [2024-07-11 02:46:55.434288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.152 qpair failed and we were unable to recover it. 00:41:05.152 [2024-07-11 02:46:55.434378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.152 [2024-07-11 02:46:55.434407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.152 qpair failed and we were unable to recover it. 00:41:05.152 [2024-07-11 02:46:55.434499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.152 [2024-07-11 02:46:55.434535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.152 qpair failed and we were unable to recover it. 00:41:05.152 [2024-07-11 02:46:55.434623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.152 [2024-07-11 02:46:55.434650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.152 qpair failed and we were unable to recover it. 00:41:05.152 [2024-07-11 02:46:55.434740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.152 [2024-07-11 02:46:55.434766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.152 qpair failed and we were unable to recover it. 
00:41:05.152 [2024-07-11 02:46:55.434852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.152 [2024-07-11 02:46:55.434879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.152 qpair failed and we were unable to recover it. 00:41:05.152 [2024-07-11 02:46:55.434959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.152 [2024-07-11 02:46:55.434987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.152 qpair failed and we were unable to recover it. 00:41:05.152 [2024-07-11 02:46:55.435085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.152 [2024-07-11 02:46:55.435112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.152 qpair failed and we were unable to recover it. 00:41:05.152 [2024-07-11 02:46:55.435199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.152 [2024-07-11 02:46:55.435227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.152 qpair failed and we were unable to recover it. 00:41:05.152 [2024-07-11 02:46:55.435319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.152 [2024-07-11 02:46:55.435349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.152 qpair failed and we were unable to recover it. 
00:41:05.152 [2024-07-11 02:46:55.435433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.152 [2024-07-11 02:46:55.435460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.152 qpair failed and we were unable to recover it. 00:41:05.152 [2024-07-11 02:46:55.435562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.152 [2024-07-11 02:46:55.435591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.152 qpair failed and we were unable to recover it. 00:41:05.152 [2024-07-11 02:46:55.435673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.152 [2024-07-11 02:46:55.435701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.152 qpair failed and we were unable to recover it. 00:41:05.152 [2024-07-11 02:46:55.435794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.152 [2024-07-11 02:46:55.435821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.152 qpair failed and we were unable to recover it. 00:41:05.152 [2024-07-11 02:46:55.435901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.152 [2024-07-11 02:46:55.435927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.152 qpair failed and we were unable to recover it. 
00:41:05.152 [2024-07-11 02:46:55.436024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.152 [2024-07-11 02:46:55.436051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.152 qpair failed and we were unable to recover it. 
00:41:05.152 [... same three-line error sequence (posix.c:1038:posix_sock_create connect() failed with errno = 111, nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it.") repeated from 02:46:55.436142 through 02:46:55.449697, cycling over tqpair handles 0x2266180, 0x7f332c000b90, 0x7f3334000b90, and 0x7f333c000b90, all targeting addr=10.0.0.2, port=4420 ...] 
00:41:05.154 [2024-07-11 02:46:55.449794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.449822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.449912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.449944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.450030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.450056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.450140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.450167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.450255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.450283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 
00:41:05.154 [2024-07-11 02:46:55.450372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.450398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.450490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.450529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.450621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.450648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.450737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.450766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.450858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.450887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 
00:41:05.154 [2024-07-11 02:46:55.450978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.451005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.451087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.451114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.451201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.451228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.451316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.451342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.451424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.451451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 
00:41:05.154 [2024-07-11 02:46:55.451549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.451578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.451669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.451699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.451787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.451815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.451908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.451937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.452024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.452051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 
00:41:05.154 [2024-07-11 02:46:55.452142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.452171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.452257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.452286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.452384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.452412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.452506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.452540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.452631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.452658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 
00:41:05.154 [2024-07-11 02:46:55.452750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.452778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.452873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.452900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.452994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.453022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.453107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.453137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.453232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.453262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 
00:41:05.154 [2024-07-11 02:46:55.453358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.453386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.453480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.453515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.453608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.453635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.453720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.453747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.453843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.453870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 
00:41:05.154 [2024-07-11 02:46:55.453959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.453987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.454084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.454113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.454211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.454238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.454327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.454353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.454441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.454468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 
00:41:05.154 [2024-07-11 02:46:55.454570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.454596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.454684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.454712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.454810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.454837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.454930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.454959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.154 [2024-07-11 02:46:55.455054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.455082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 
00:41:05.154 [2024-07-11 02:46:55.455170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.154 [2024-07-11 02:46:55.455198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.154 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.455289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.455316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.455410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.455438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.455532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.455562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.455654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.455681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 
00:41:05.155 [2024-07-11 02:46:55.455780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.455807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.455893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.455919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.456009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.456036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.456129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.456159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.456249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.456277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 
00:41:05.155 [2024-07-11 02:46:55.456379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.456408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.456496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.456528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.456623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.456651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.456739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.456766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.456848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.456875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 
00:41:05.155 [2024-07-11 02:46:55.456962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.456989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.457082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.457110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.457201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.457230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.457328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.457357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.457445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.457472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 
00:41:05.155 [2024-07-11 02:46:55.457572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.457600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.457688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.457715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.457808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.457836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.457919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.457949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.458035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.458062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 
00:41:05.155 [2024-07-11 02:46:55.458153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.458180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.458266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.458293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.458376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.458402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.458486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.458518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.458602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.458629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 
00:41:05.155 [2024-07-11 02:46:55.458717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.458743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.458827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.458853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.458945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.458974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.459071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.459100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 00:41:05.155 [2024-07-11 02:46:55.459194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.155 [2024-07-11 02:46:55.459222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.155 qpair failed and we were unable to recover it. 
00:41:05.155 [... the same posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error triplet repeats from 02:46:55.459310 through 02:46:55.460389, cycling through tqpair values 0x7f333c000b90, 0x7f332c000b90, 0x7f3334000b90 and 0x2266180, all against addr=10.0.0.2, port=4420 ...]
00:41:05.155 EAL: No free 2048 kB hugepages reported on node 1
00:41:05.155 [... the same error triplet continues repeating from 02:46:55.460469 through 02:46:55.472334 for tqpair values 0x2266180, 0x7f3334000b90, 0x7f333c000b90 and 0x7f332c000b90 with addr=10.0.0.2, port=4420; every qpair failed and could not be recovered ...]
00:41:05.157 [2024-07-11 02:46:55.472433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.472461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 00:41:05.157 [2024-07-11 02:46:55.472560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.472588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 00:41:05.157 [2024-07-11 02:46:55.472675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.472702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 00:41:05.157 [2024-07-11 02:46:55.472791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.472818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 00:41:05.157 [2024-07-11 02:46:55.472901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.472928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 
00:41:05.157 [2024-07-11 02:46:55.473023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.473049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 00:41:05.157 [2024-07-11 02:46:55.473146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.473175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 00:41:05.157 [2024-07-11 02:46:55.473267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.473296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 00:41:05.157 [2024-07-11 02:46:55.473390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.473418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 00:41:05.157 [2024-07-11 02:46:55.473524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.473551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 
00:41:05.157 [2024-07-11 02:46:55.473645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.473673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 00:41:05.157 [2024-07-11 02:46:55.473765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.473791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 00:41:05.157 [2024-07-11 02:46:55.473888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.473916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 00:41:05.157 [2024-07-11 02:46:55.474003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.474030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 00:41:05.157 [2024-07-11 02:46:55.474116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.474142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 
00:41:05.157 [2024-07-11 02:46:55.474229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.474255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 00:41:05.157 [2024-07-11 02:46:55.474350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.474380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 00:41:05.157 [2024-07-11 02:46:55.474476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.474504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 00:41:05.157 [2024-07-11 02:46:55.474607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.474636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 00:41:05.157 [2024-07-11 02:46:55.474732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.474760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 
00:41:05.157 [2024-07-11 02:46:55.474856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.474884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 00:41:05.157 [2024-07-11 02:46:55.474976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.475003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 00:41:05.157 [2024-07-11 02:46:55.475088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.475115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 00:41:05.157 [2024-07-11 02:46:55.475213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.475242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 00:41:05.157 [2024-07-11 02:46:55.475337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.475365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 
00:41:05.157 [2024-07-11 02:46:55.475453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.475486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 00:41:05.157 [2024-07-11 02:46:55.475583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.475611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 00:41:05.157 [2024-07-11 02:46:55.475703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.475730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 00:41:05.157 [2024-07-11 02:46:55.475820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.475848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 00:41:05.157 [2024-07-11 02:46:55.475943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.475969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 
00:41:05.157 [2024-07-11 02:46:55.476064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.476090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 00:41:05.157 [2024-07-11 02:46:55.476186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.157 [2024-07-11 02:46:55.476215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.157 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.476312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.476340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.476429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.476458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.476553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.476580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 
00:41:05.158 [2024-07-11 02:46:55.476681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.476709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.476798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.476824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.476914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.476942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.477044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.477071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.477172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.477200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 
00:41:05.158 [2024-07-11 02:46:55.477334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.477361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.477447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.477473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.477570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.477597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.477689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.477718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.477813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.477841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 
00:41:05.158 [2024-07-11 02:46:55.477930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.477956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.478039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.478065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.478164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.478191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.478284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.478310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.478402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.478429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 
00:41:05.158 [2024-07-11 02:46:55.478534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.478561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.478656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.478682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.478768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.478799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.478892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.478920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.479012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.479042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 
00:41:05.158 [2024-07-11 02:46:55.479136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.479165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.479261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.479289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.479381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.479407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.479497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.479530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.479626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.479653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 
00:41:05.158 [2024-07-11 02:46:55.479752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.479778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.479868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.479898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.479983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.480011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.480108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.480134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.480221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.480248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 
00:41:05.158 [2024-07-11 02:46:55.480338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.480365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.480463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.480492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.480593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.480621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.480710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.480737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.480827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.480854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 
00:41:05.158 [2024-07-11 02:46:55.480941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.480967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.481056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.481083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.481174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.481202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.481291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.481318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.481407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.481433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 
00:41:05.158 [2024-07-11 02:46:55.481527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.481555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.481641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.481668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.481750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.481776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.481866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.481893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.481983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.482012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 
00:41:05.158 [2024-07-11 02:46:55.482102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.482129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.482219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.482246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.482341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.482368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.482457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.482483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.482583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.482610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 
00:41:05.158 [2024-07-11 02:46:55.482694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.482721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.482807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.158 [2024-07-11 02:46:55.482834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.158 qpair failed and we were unable to recover it. 00:41:05.158 [2024-07-11 02:46:55.482924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.482950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.483040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.483068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.483153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.483183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 
00:41:05.159 [2024-07-11 02:46:55.483279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.483307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.483400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.483428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.483523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.483550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.483648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.483677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.483767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.483794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 
00:41:05.159 [2024-07-11 02:46:55.483884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.483910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.484043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.484070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.484159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.484186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.484274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.484301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.484432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.484459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 
00:41:05.159 [2024-07-11 02:46:55.484551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.484578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.484664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.484690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.484779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.484806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.484891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.484918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.485006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.485032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 
00:41:05.159 [2024-07-11 02:46:55.485124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.485152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.485253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.485283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.485377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.485405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.485508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.485545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.485639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.485666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 
00:41:05.159 [2024-07-11 02:46:55.485762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.485789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.485878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.485905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.485988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.486014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.486105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.486134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.486228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.486255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 
00:41:05.159 [2024-07-11 02:46:55.486342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.486369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.486454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.486480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.486574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.486601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.486692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.486718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.486809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.486840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 
00:41:05.159 [2024-07-11 02:46:55.486930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.486956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.487045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.487071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.487160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.487185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.487275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.487302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.487392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.487419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 
00:41:05.159 [2024-07-11 02:46:55.487507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.487541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.487637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.487664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.487748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.487774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.487862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.487889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.487983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.488011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 
00:41:05.159 [2024-07-11 02:46:55.488101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.488128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.488224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.488252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.488345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.488375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.488474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.488503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.488602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.488629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 
00:41:05.159 [2024-07-11 02:46:55.488715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.488742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.488834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.488861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.488952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.488979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.489065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.489091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.489186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.489214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 
00:41:05.159 [2024-07-11 02:46:55.489305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.489334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.489425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.489454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.489547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.489575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.489665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.489693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.489783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.489811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 
00:41:05.159 [2024-07-11 02:46:55.489904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.489932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.490036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.490064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.490159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.159 [2024-07-11 02:46:55.490187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.159 qpair failed and we were unable to recover it. 00:41:05.159 [2024-07-11 02:46:55.490275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.490303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 00:41:05.160 [2024-07-11 02:46:55.490394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.490423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 
00:41:05.160 [2024-07-11 02:46:55.490521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.490548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 00:41:05.160 [2024-07-11 02:46:55.490643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.490671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 00:41:05.160 [2024-07-11 02:46:55.490757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.490783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 00:41:05.160 [2024-07-11 02:46:55.490877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.490906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 00:41:05.160 [2024-07-11 02:46:55.490998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.491025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 
00:41:05.160 [2024-07-11 02:46:55.491115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.491144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 00:41:05.160 [2024-07-11 02:46:55.491237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.491264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 00:41:05.160 [2024-07-11 02:46:55.491348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.491374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 00:41:05.160 [2024-07-11 02:46:55.491459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.491485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 00:41:05.160 [2024-07-11 02:46:55.491575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.491607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 
00:41:05.160 [2024-07-11 02:46:55.491700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.491730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 00:41:05.160 [2024-07-11 02:46:55.491816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.491843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 00:41:05.160 [2024-07-11 02:46:55.491934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.491962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 00:41:05.160 [2024-07-11 02:46:55.492049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.492076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 00:41:05.160 [2024-07-11 02:46:55.492161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.492188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 
00:41:05.160 [2024-07-11 02:46:55.492277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.492303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 00:41:05.160 [2024-07-11 02:46:55.492386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.492413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 00:41:05.160 [2024-07-11 02:46:55.492496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.492529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 00:41:05.160 [2024-07-11 02:46:55.492621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.492650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 00:41:05.160 [2024-07-11 02:46:55.492792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.492820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 
00:41:05.160 [2024-07-11 02:46:55.492914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.492940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 00:41:05.160 [2024-07-11 02:46:55.493034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.493062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 00:41:05.160 [2024-07-11 02:46:55.493151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.493178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 00:41:05.160 [2024-07-11 02:46:55.493282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.493309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 00:41:05.160 [2024-07-11 02:46:55.493404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.493432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 
00:41:05.160 [2024-07-11 02:46:55.493531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.493559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 00:41:05.160 [2024-07-11 02:46:55.493651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.493677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 00:41:05.160 [2024-07-11 02:46:55.493767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.493794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 00:41:05.160 [2024-07-11 02:46:55.493820] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:05.160 [2024-07-11 02:46:55.493889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.493915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 00:41:05.160 [2024-07-11 02:46:55.494013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.160 [2024-07-11 02:46:55.494042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.160 qpair failed and we were unable to recover it. 
00:41:05.160 [2024-07-11 02:46:55.494139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.494168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.494263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.494291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.494379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.494405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.494492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.494527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.494613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.494640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.494747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.494774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.494872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.494900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.495037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.495066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.495158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.495185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.495278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.495304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.495391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.495417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.495505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.495537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.495641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.495668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.495763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.495789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.495881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.495907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.495997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.496026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.496121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.496149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.496237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.496263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.496353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.496379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.496471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.496503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.496602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.496630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.496719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.496745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.496841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.496868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.496956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.496984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.497080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.497109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.497199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.497227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.497324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.497350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.497442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.160 [2024-07-11 02:46:55.497470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.160 qpair failed and we were unable to recover it.
00:41:05.160 [2024-07-11 02:46:55.497568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.497595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.497697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.497726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.497820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.497849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.497936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.497963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.498046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.498072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.498166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.498192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.498276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.498302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.498390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.498416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.498519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.498547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.498643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.498669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.498769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.498795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.498887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.498914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.499011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.499038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.499137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.499170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.499261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.499289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.499389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.499418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.499519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.499547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.499641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.499667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.499766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.499796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.499889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.499916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.500001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.500028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.500123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.500149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.500234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.500261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.500348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.500375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.500471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.500499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.500615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.500642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.500742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.500772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.500871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.500899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.501011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.501041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.501133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.501160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.501249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.501278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.501368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.501395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.501498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.501539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.501630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.501656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.501748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.501776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.501876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.501903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.502017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.502044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.502135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.502161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.502255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.502283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.502378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.502407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.502502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.502537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.502631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.502658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.502752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.502779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.502875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.502901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.502987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.503013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.503112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.503141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.503236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.503265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.503366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.503395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.503500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.503534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.503632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.503659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.503750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.503777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.503870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.503898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.503992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.504019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.504114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.504142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.504231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.161 [2024-07-11 02:46:55.504258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.161 qpair failed and we were unable to recover it.
00:41:05.161 [2024-07-11 02:46:55.504350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.504379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.504471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.504497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.504590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.504616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.504717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.504749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.504839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.504866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.504957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.504983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.505074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.505102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.505190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.505217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.505303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.505330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.505416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.505443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.505536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.505564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.505649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.505675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.505760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.505786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.505877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.505903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.505998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.506025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.506115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.506143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.506230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.506258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.506364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.506394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.506491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.506529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.506625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.506652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.506742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.506769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.506859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.506885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.506975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.507002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.507096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.507122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.507214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.507240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.507333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.507359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.507452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.507480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.507574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.507601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.507693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.507719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.507802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.507828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.507926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.507960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.508050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.508077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.508170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.508196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.508293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.508320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.508421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.508448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.508540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.508567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.508657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.508684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.508782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.508813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.508909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.508937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.509028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.509056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.509146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.509174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.509258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.509284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.509371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.509398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.509488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.509522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.509618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.509646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.509742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.509769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.509855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.509882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.509974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.510004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.510099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.510128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.510239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.510268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.510364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.510392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.510484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.510517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.510616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.510643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.510729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.510756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.510852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.510879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.510971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.510997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.511092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.511120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.511218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.511248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.511343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.511370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.511464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.511492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.511590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.162 [2024-07-11 02:46:55.511618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.162 qpair failed and we were unable to recover it.
00:41:05.162 [2024-07-11 02:46:55.511706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.163 [2024-07-11 02:46:55.511732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.163 qpair failed and we were unable to recover it.
00:41:05.163 [2024-07-11 02:46:55.511829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.163 [2024-07-11 02:46:55.511856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.163 qpair failed and we were unable to recover it.
00:41:05.163 [2024-07-11 02:46:55.511952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.163 [2024-07-11 02:46:55.511980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.163 qpair failed and we were unable to recover it.
00:41:05.163 [2024-07-11 02:46:55.512065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.163 [2024-07-11 02:46:55.512092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.163 qpair failed and we were unable to recover it.
00:41:05.163 [2024-07-11 02:46:55.512180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.163 [2024-07-11 02:46:55.512207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.163 qpair failed and we were unable to recover it.
00:41:05.163 [2024-07-11 02:46:55.512296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.163 [2024-07-11 02:46:55.512323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.163 qpair failed and we were unable to recover it.
00:41:05.163 [2024-07-11 02:46:55.512404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.163 [2024-07-11 02:46:55.512430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.163 qpair failed and we were unable to recover it.
00:41:05.163 [2024-07-11 02:46:55.512531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.163 [2024-07-11 02:46:55.512560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.163 qpair failed and we were unable to recover it.
00:41:05.163 [2024-07-11 02:46:55.512655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.163 [2024-07-11 02:46:55.512684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.163 qpair failed and we were unable to recover it.
00:41:05.163 [2024-07-11 02:46:55.512776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.163 [2024-07-11 02:46:55.512802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.163 qpair failed and we were unable to recover it.
00:41:05.163 [2024-07-11 02:46:55.512895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.163 [2024-07-11 02:46:55.512922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.163 qpair failed and we were unable to recover it.
00:41:05.163 [2024-07-11 02:46:55.513012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.163 [2024-07-11 02:46:55.513039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.163 qpair failed and we were unable to recover it.
00:41:05.433 [2024-07-11 02:46:55.513131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.513161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.513259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.513288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.513381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.513408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.513493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.513532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.513627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.513656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.513751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.513780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.513872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.513900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.513994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.514020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.514110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.514138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.514230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.514257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.514349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.514377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.514470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.514498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.514596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.514624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.514723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.514750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.514841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.514869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.514962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.514989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.515079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.515107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.515201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.515228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.515320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.515348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.515437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.515464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.515559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.515586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.515675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.515702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.515785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.515811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.515899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.515925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.516015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.516045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.516134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.516161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.516247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.516273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.516362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.516388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.516479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.516507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.516606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.516634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.516724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.516750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.516834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.516861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.516953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.516983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.517079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.434 [2024-07-11 02:46:55.517108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.434 qpair failed and we were unable to recover it.
00:41:05.434 [2024-07-11 02:46:55.517201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.434 [2024-07-11 02:46:55.517227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.434 qpair failed and we were unable to recover it. 00:41:05.434 [2024-07-11 02:46:55.517319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.434 [2024-07-11 02:46:55.517347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.434 qpair failed and we were unable to recover it. 00:41:05.434 [2024-07-11 02:46:55.517436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.434 [2024-07-11 02:46:55.517463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.434 qpair failed and we were unable to recover it. 00:41:05.434 [2024-07-11 02:46:55.517567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.434 [2024-07-11 02:46:55.517595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.434 qpair failed and we were unable to recover it. 00:41:05.434 [2024-07-11 02:46:55.517692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.434 [2024-07-11 02:46:55.517720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.434 qpair failed and we were unable to recover it. 
00:41:05.434 [2024-07-11 02:46:55.517810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.434 [2024-07-11 02:46:55.517838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.434 qpair failed and we were unable to recover it. 00:41:05.434 [2024-07-11 02:46:55.517930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.517956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.518046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.518075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.518166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.518195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.518286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.518314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 
00:41:05.435 [2024-07-11 02:46:55.518404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.518430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.518531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.518559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.518655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.518684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.518780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.518808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.518899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.518926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 
00:41:05.435 [2024-07-11 02:46:55.519016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.519043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.519130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.519156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.519249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.519277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.519371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.519399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.519487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.519523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 
00:41:05.435 [2024-07-11 02:46:55.519617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.519644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.519739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.519766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.519852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.519879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.519969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.519996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.520084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.520112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 
00:41:05.435 [2024-07-11 02:46:55.520199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.520226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.520322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.520351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.520438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.520465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.520565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.520592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.520682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.520709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 
00:41:05.435 [2024-07-11 02:46:55.520801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.520827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.520924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.520951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.521043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.521072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.521164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.521192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.521285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.521311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 
00:41:05.435 [2024-07-11 02:46:55.521395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.521422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.521518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.521546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.521637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.521664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.521756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.521785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.521877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.521904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 
00:41:05.435 [2024-07-11 02:46:55.521994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.522021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.435 [2024-07-11 02:46:55.522133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.435 [2024-07-11 02:46:55.522161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.435 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.522249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.522276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.522376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.522406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.522502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.522537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 
00:41:05.436 [2024-07-11 02:46:55.522628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.522656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.522748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.522775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.522875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.522903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.522995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.523023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.523117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.523145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 
00:41:05.436 [2024-07-11 02:46:55.523239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.523268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.523404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.523431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.523523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.523551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.523638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.523665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.523768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.523801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 
00:41:05.436 [2024-07-11 02:46:55.523892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.523920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.524012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.524042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.524138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.524171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.524267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.524295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.524388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.524415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 
00:41:05.436 [2024-07-11 02:46:55.524508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.524542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.524636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.524663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.524796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.524823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.524914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.524941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.525037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.525065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 
00:41:05.436 [2024-07-11 02:46:55.525151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.525177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.525267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.525295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.525386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.525415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.525519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.525548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.525641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.525671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 
00:41:05.436 [2024-07-11 02:46:55.525769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.525796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.525895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.525922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.526012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.526039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.526129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.526157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.526251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.526280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 
00:41:05.436 [2024-07-11 02:46:55.526371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.526400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.526496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.436 [2024-07-11 02:46:55.526539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.436 qpair failed and we were unable to recover it. 00:41:05.436 [2024-07-11 02:46:55.526634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.437 [2024-07-11 02:46:55.526661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.437 qpair failed and we were unable to recover it. 00:41:05.437 [2024-07-11 02:46:55.526747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.437 [2024-07-11 02:46:55.526774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.437 qpair failed and we were unable to recover it. 00:41:05.437 [2024-07-11 02:46:55.526859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.437 [2024-07-11 02:46:55.526886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.437 qpair failed and we were unable to recover it. 
00:41:05.437 [2024-07-11 02:46:55.527021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.437 [2024-07-11 02:46:55.527049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.437 qpair failed and we were unable to recover it. 00:41:05.437 [2024-07-11 02:46:55.527134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.437 [2024-07-11 02:46:55.527160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.437 qpair failed and we were unable to recover it. 00:41:05.437 [2024-07-11 02:46:55.527256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.437 [2024-07-11 02:46:55.527284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.437 qpair failed and we were unable to recover it. 00:41:05.437 [2024-07-11 02:46:55.527374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.437 [2024-07-11 02:46:55.527402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.437 qpair failed and we were unable to recover it. 00:41:05.437 [2024-07-11 02:46:55.527500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.437 [2024-07-11 02:46:55.527538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.437 qpair failed and we were unable to recover it. 
00:41:05.437 [2024-07-11 02:46:55.527630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.437 [2024-07-11 02:46:55.527657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.437 qpair failed and we were unable to recover it. 00:41:05.437 [2024-07-11 02:46:55.527756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.437 [2024-07-11 02:46:55.527785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.437 qpair failed and we were unable to recover it. 00:41:05.437 [2024-07-11 02:46:55.527880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.437 [2024-07-11 02:46:55.527907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.437 qpair failed and we were unable to recover it. 00:41:05.437 [2024-07-11 02:46:55.527997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.437 [2024-07-11 02:46:55.528024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.437 qpair failed and we were unable to recover it. 00:41:05.437 [2024-07-11 02:46:55.528121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.437 [2024-07-11 02:46:55.528150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.437 qpair failed and we were unable to recover it. 
00:41:05.437 [... repeated connect() failed, errno = 111 / qpair recovery failures elided; same pattern for tqpair=0x7f333c000b90, 0x7f332c000b90, 0x7f3334000b90, and 0x2266180, all with addr=10.0.0.2, port=4420 ...]
00:41:05.440 [2024-07-11 02:46:55.541708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.440 [2024-07-11 02:46:55.541735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.440 qpair failed and we were unable to recover it. 00:41:05.440 [2024-07-11 02:46:55.541828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.440 [2024-07-11 02:46:55.541855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.440 qpair failed and we were unable to recover it. 00:41:05.440 [2024-07-11 02:46:55.541947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.440 [2024-07-11 02:46:55.541978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.440 qpair failed and we were unable to recover it. 00:41:05.440 [2024-07-11 02:46:55.542066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.440 [2024-07-11 02:46:55.542094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.440 qpair failed and we were unable to recover it. 00:41:05.440 [2024-07-11 02:46:55.542195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.440 [2024-07-11 02:46:55.542224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.440 qpair failed and we were unable to recover it. 
00:41:05.440 [2024-07-11 02:46:55.542318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.440 [2024-07-11 02:46:55.542346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.440 qpair failed and we were unable to recover it. 00:41:05.440 [2024-07-11 02:46:55.542435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.440 [2024-07-11 02:46:55.542464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.440 qpair failed and we were unable to recover it. 00:41:05.440 [2024-07-11 02:46:55.542571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.440 [2024-07-11 02:46:55.542599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.440 qpair failed and we were unable to recover it. 00:41:05.440 [2024-07-11 02:46:55.542701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.440 [2024-07-11 02:46:55.542728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.440 qpair failed and we were unable to recover it. 00:41:05.440 [2024-07-11 02:46:55.542819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.440 [2024-07-11 02:46:55.542846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.440 qpair failed and we were unable to recover it. 
00:41:05.440 [2024-07-11 02:46:55.542944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.440 [2024-07-11 02:46:55.542971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.440 qpair failed and we were unable to recover it. 00:41:05.440 [2024-07-11 02:46:55.543064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.440 [2024-07-11 02:46:55.543091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.440 qpair failed and we were unable to recover it. 00:41:05.440 [2024-07-11 02:46:55.543181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.440 [2024-07-11 02:46:55.543207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.440 qpair failed and we were unable to recover it. 00:41:05.440 [2024-07-11 02:46:55.543296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.440 [2024-07-11 02:46:55.543323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.440 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.543421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.543448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 
00:41:05.441 [2024-07-11 02:46:55.543538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.543567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.543668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.543694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.543791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.543820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.543913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.543940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.544033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.544061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 
00:41:05.441 [2024-07-11 02:46:55.544154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.544182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.544276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.544303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.544434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.544461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.544571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.544599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.544688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.544714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 
00:41:05.441 [2024-07-11 02:46:55.544811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.544838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.544924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.544950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.545039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.545065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.545156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.545183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.545274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.545301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 
00:41:05.441 [2024-07-11 02:46:55.545390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.545417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.545506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.545539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.545633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.545660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.545756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.545786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.545882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.545909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 
00:41:05.441 [2024-07-11 02:46:55.546003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.546031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.546139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.546166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.546260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.546292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.546387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.546416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.546515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.546542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 
00:41:05.441 [2024-07-11 02:46:55.546634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.546661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.546752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.546779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.546870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.546897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.546988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.547014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.547103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.547132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 
00:41:05.441 [2024-07-11 02:46:55.547223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.547252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.547350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.547378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.547475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.547503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.547604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.547632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.547721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.547748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 
00:41:05.441 [2024-07-11 02:46:55.547840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.547867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.547963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.547990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.548094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.548120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.548205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.548232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.441 qpair failed and we were unable to recover it. 00:41:05.441 [2024-07-11 02:46:55.548325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.441 [2024-07-11 02:46:55.548355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.442 qpair failed and we were unable to recover it. 
00:41:05.442 [2024-07-11 02:46:55.548454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.442 [2024-07-11 02:46:55.548483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.442 qpair failed and we were unable to recover it. 00:41:05.442 [2024-07-11 02:46:55.548691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.442 [2024-07-11 02:46:55.548718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.442 qpair failed and we were unable to recover it. 00:41:05.442 [2024-07-11 02:46:55.548812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.442 [2024-07-11 02:46:55.548840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.442 qpair failed and we were unable to recover it. 00:41:05.442 [2024-07-11 02:46:55.548930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.442 [2024-07-11 02:46:55.548963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.442 qpair failed and we were unable to recover it. 00:41:05.442 [2024-07-11 02:46:55.549057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.442 [2024-07-11 02:46:55.549086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.442 qpair failed and we were unable to recover it. 
00:41:05.442 [2024-07-11 02:46:55.549181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.442 [2024-07-11 02:46:55.549208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.442 qpair failed and we were unable to recover it. 00:41:05.442 [2024-07-11 02:46:55.549300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.442 [2024-07-11 02:46:55.549326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.442 qpair failed and we were unable to recover it. 00:41:05.442 [2024-07-11 02:46:55.549415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.442 [2024-07-11 02:46:55.549442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.442 qpair failed and we were unable to recover it. 00:41:05.442 [2024-07-11 02:46:55.549533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.442 [2024-07-11 02:46:55.549560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.442 qpair failed and we were unable to recover it. 00:41:05.442 [2024-07-11 02:46:55.549651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.442 [2024-07-11 02:46:55.549678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.442 qpair failed and we were unable to recover it. 
00:41:05.442 [2024-07-11 02:46:55.549769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.442 [2024-07-11 02:46:55.549797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.442 qpair failed and we were unable to recover it. 00:41:05.442 [2024-07-11 02:46:55.549888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.442 [2024-07-11 02:46:55.549915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.442 qpair failed and we were unable to recover it. 00:41:05.442 [2024-07-11 02:46:55.550009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.442 [2024-07-11 02:46:55.550036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.442 qpair failed and we were unable to recover it. 00:41:05.442 [2024-07-11 02:46:55.550123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.442 [2024-07-11 02:46:55.550149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.442 qpair failed and we were unable to recover it. 00:41:05.442 [2024-07-11 02:46:55.550234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.442 [2024-07-11 02:46:55.550261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.442 qpair failed and we were unable to recover it. 
00:41:05.442 [2024-07-11 02:46:55.550350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.442 [2024-07-11 02:46:55.550376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.442 qpair failed and we were unable to recover it. 00:41:05.442 [2024-07-11 02:46:55.550475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.442 [2024-07-11 02:46:55.550500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.442 qpair failed and we were unable to recover it. 00:41:05.442 [2024-07-11 02:46:55.550604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.442 [2024-07-11 02:46:55.550631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.442 qpair failed and we were unable to recover it. 00:41:05.442 [2024-07-11 02:46:55.550726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.442 [2024-07-11 02:46:55.550756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.442 qpair failed and we were unable to recover it. 00:41:05.442 [2024-07-11 02:46:55.550851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.442 [2024-07-11 02:46:55.550879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.442 qpair failed and we were unable to recover it. 
00:41:05.442 [2024-07-11 02:46:55.550972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.442 [2024-07-11 02:46:55.551000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.442 qpair failed and we were unable to recover it.
[the two-message pattern above repeats continuously from 02:46:55.551092 through 02:46:55.565029 for tqpair=0x2266180, 0x7f332c000b90, 0x7f3334000b90, and 0x7f333c000b90: every connect() to addr=10.0.0.2, port=4420 failed with errno = 111 (ECONNREFUSED), and each qpair failed and could not be recovered]
00:41:05.445 [2024-07-11 02:46:55.565116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.445 [2024-07-11 02:46:55.565143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.445 qpair failed and we were unable to recover it. 00:41:05.445 [2024-07-11 02:46:55.565239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.445 [2024-07-11 02:46:55.565269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.445 qpair failed and we were unable to recover it. 00:41:05.445 [2024-07-11 02:46:55.565358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.445 [2024-07-11 02:46:55.565388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.445 qpair failed and we were unable to recover it. 00:41:05.445 [2024-07-11 02:46:55.565480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.445 [2024-07-11 02:46:55.565507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.445 qpair failed and we were unable to recover it. 00:41:05.445 [2024-07-11 02:46:55.565609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.445 [2024-07-11 02:46:55.565636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.445 qpair failed and we were unable to recover it. 
00:41:05.445 [2024-07-11 02:46:55.565722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.445 [2024-07-11 02:46:55.565748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.445 qpair failed and we were unable to recover it. 00:41:05.445 [2024-07-11 02:46:55.565836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.445 [2024-07-11 02:46:55.565862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.445 qpair failed and we were unable to recover it. 00:41:05.445 [2024-07-11 02:46:55.565952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.445 [2024-07-11 02:46:55.565978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.445 qpair failed and we were unable to recover it. 00:41:05.445 [2024-07-11 02:46:55.566064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.445 [2024-07-11 02:46:55.566092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.445 qpair failed and we were unable to recover it. 00:41:05.445 [2024-07-11 02:46:55.566182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.566209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 
00:41:05.446 [2024-07-11 02:46:55.566305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.566334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.566434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.566464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.566610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.566640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.566732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.566762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.566854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.566881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 
00:41:05.446 [2024-07-11 02:46:55.566975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.567002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.567102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.567130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.567222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.567249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.567340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.567369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.567463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.567490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 
00:41:05.446 [2024-07-11 02:46:55.567587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.567614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.567708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.567734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.567823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.567850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.567945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.567974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.568061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.568089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 
00:41:05.446 [2024-07-11 02:46:55.568177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.568204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.568288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.568314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.568402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.568429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.568530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.568562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.568654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.568686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 
00:41:05.446 [2024-07-11 02:46:55.568772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.568799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.568886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.568913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.569012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.569039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.569137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.569166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.569260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.569288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 
00:41:05.446 [2024-07-11 02:46:55.569380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.569407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.569501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.569541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.569632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.569661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.569751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.569778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.569865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.569891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 
00:41:05.446 [2024-07-11 02:46:55.569979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.570005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.570097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.570124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.570214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.570242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.570342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.570371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.570472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.570501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 
00:41:05.446 [2024-07-11 02:46:55.570609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.570637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.570731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.570759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.570853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.570880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.570969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.570998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 00:41:05.446 [2024-07-11 02:46:55.571093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.446 [2024-07-11 02:46:55.571121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.446 qpair failed and we were unable to recover it. 
00:41:05.446 [2024-07-11 02:46:55.571214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.571243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.571335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.571363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.571448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.571475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.571577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.571606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.571697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.571724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 
00:41:05.447 [2024-07-11 02:46:55.571859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.571885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.571979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.572007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.572102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.572130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.572232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.572261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.572356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.572383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 
00:41:05.447 [2024-07-11 02:46:55.572479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.572507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.572614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.572641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.572734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.572763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.572853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.572879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.572972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.572999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 
00:41:05.447 [2024-07-11 02:46:55.573092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.573118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.573218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.573247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.573340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.573368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.573455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.573482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.573580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.573613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 
00:41:05.447 [2024-07-11 02:46:55.573701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.573731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.573824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.573851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.573941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.573968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.574062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.574089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.574180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.574209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 
00:41:05.447 [2024-07-11 02:46:55.574305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.574333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.574425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.574453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.574545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.574572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.574666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.574695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.574795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.574823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 
00:41:05.447 [2024-07-11 02:46:55.574960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.574986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.575074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.575100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.575197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.575224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.575319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.575345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 00:41:05.447 [2024-07-11 02:46:55.575433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.447 [2024-07-11 02:46:55.575461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.447 qpair failed and we were unable to recover it. 
00:41:05.448 [2024-07-11 02:46:55.575563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.575593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.575691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.575718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.575811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.575838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.575936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.575962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.576059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.576088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 
00:41:05.448 [2024-07-11 02:46:55.576175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.576203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.576291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.576319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.576408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.576434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.576532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.576562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.576650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.576679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 
00:41:05.448 [2024-07-11 02:46:55.576775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.576801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.576937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.576969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.577061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.577090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.577186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.577216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.577307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.577335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 
00:41:05.448 [2024-07-11 02:46:55.577428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.577455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.577553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.577581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.577677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.577704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.577798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.577827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.577923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.577950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 
00:41:05.448 [2024-07-11 02:46:55.578040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.578068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.578155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.578181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.578273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.578301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.578396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.578425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.578526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.578554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 
00:41:05.448 [2024-07-11 02:46:55.578654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.578681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.578768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.578794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.578877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.578904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.578994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.579022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.579115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.579142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 
00:41:05.448 [2024-07-11 02:46:55.579236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.579267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.579361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.579391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.579487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.579523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.579614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.579641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.579731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.579757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 
00:41:05.448 [2024-07-11 02:46:55.579846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.579872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.579959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.579988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.580077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.580106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.580203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.580232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 00:41:05.448 [2024-07-11 02:46:55.580326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.580353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.448 qpair failed and we were unable to recover it. 
00:41:05.448 [2024-07-11 02:46:55.580439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.448 [2024-07-11 02:46:55.580466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.580565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.580593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.580685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.580714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.580809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.580837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.580926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.580952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 
00:41:05.449 [2024-07-11 02:46:55.581043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.581071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.581155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.581181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.581266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.581292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.581385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.581413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.581506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.581539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 
00:41:05.449 [2024-07-11 02:46:55.581638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.581665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.581759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.581790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.581895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.581922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.582013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.582043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.582134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.582162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 
00:41:05.449 [2024-07-11 02:46:55.582258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.582285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.582375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.582402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.582498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.582533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.582627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.582654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.582750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.582777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 
00:41:05.449 [2024-07-11 02:46:55.582869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.582895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.582986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.583014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.583152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.583178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.583270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.583296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.583384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.583412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 
00:41:05.449 [2024-07-11 02:46:55.583503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.583535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.583669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.583695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.583792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.583821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.583918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.583947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.584054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.584083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 
00:41:05.449 [2024-07-11 02:46:55.584177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.584205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.584294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.584321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.584413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.584440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.584542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.584571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.584672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.584699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 
00:41:05.449 [2024-07-11 02:46:55.584789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.584817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.584919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.584946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.585043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.585071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.585164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.585195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.585286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.585313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 
00:41:05.449 [2024-07-11 02:46:55.585405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.585431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.449 qpair failed and we were unable to recover it. 00:41:05.449 [2024-07-11 02:46:55.585537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.449 [2024-07-11 02:46:55.585565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.450 qpair failed and we were unable to recover it. 00:41:05.450 [2024-07-11 02:46:55.585638] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:05.450 [2024-07-11 02:46:55.585648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.450 [2024-07-11 02:46:55.585672] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:05.450 [2024-07-11 02:46:55.585677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.450 [2024-07-11 02:46:55.585689] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:05.450 qpair failed and we were unable to recover it. 00:41:05.450 [2024-07-11 02:46:55.585704] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:05.450 [2024-07-11 02:46:55.585716] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:05.450 [2024-07-11 02:46:55.585777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.450 [2024-07-11 02:46:55.585803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.450 qpair failed and we were unable to recover it. 00:41:05.450 [2024-07-11 02:46:55.585777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:41:05.450 [2024-07-11 02:46:55.585837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:41:05.450 [2024-07-11 02:46:55.585899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.450 [2024-07-11 02:46:55.585810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:41:05.450 [2024-07-11 02:46:55.585925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.450 qpair failed and we were unable to recover it. 00:41:05.450 [2024-07-11 02:46:55.585833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:41:05.450 [2024-07-11 02:46:55.586022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.450 [2024-07-11 02:46:55.586049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.450 qpair failed and we were unable to recover it. 00:41:05.450 [2024-07-11 02:46:55.586140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.450 [2024-07-11 02:46:55.586165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.450 qpair failed and we were unable to recover it. 
00:41:05.450 [2024-07-11 02:46:55.586262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.450 [2024-07-11 02:46:55.586288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.450 qpair failed and we were unable to recover it. 00:41:05.450 [2024-07-11 02:46:55.586384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.450 [2024-07-11 02:46:55.586410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.450 qpair failed and we were unable to recover it. 00:41:05.450 [2024-07-11 02:46:55.586535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.450 [2024-07-11 02:46:55.586564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.450 qpair failed and we were unable to recover it. 00:41:05.450 [2024-07-11 02:46:55.586652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.450 [2024-07-11 02:46:55.586679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.450 qpair failed and we were unable to recover it. 00:41:05.450 [2024-07-11 02:46:55.586766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:05.450 [2024-07-11 02:46:55.586793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420 00:41:05.450 qpair failed and we were unable to recover it. 
00:41:05.450 [2024-07-11 02:46:55.586886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.586912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.587010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.587039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.587140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.587167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.587255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.587282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.587377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.587405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.587503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.587539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.587627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.587654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.587744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.587771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.587873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.587900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.587998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.588026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.588119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.588150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.588245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.588272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.588355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.588382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.588479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.588507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.588616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.588643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.588737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.588764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.588868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.588895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.588994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.589021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.589109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.589136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.589237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.589269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.589367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.589396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.589486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.589520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.589624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.589655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.589749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.589776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.589881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.589908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.590012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.590039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.590143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.590170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.450 [2024-07-11 02:46:55.590260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.450 [2024-07-11 02:46:55.590287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.450 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.590401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.590435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 A controller has encountered a failure and is being reset.
00:41:05.451 [2024-07-11 02:46:55.590552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.590582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.590673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.590702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.590791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.590817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.590916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.590943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.591049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.591078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.591169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.591196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.591285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.591312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.591417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.591444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.591550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.591579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.591669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.591697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.591792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.591818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.591908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.591936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.592035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.592062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.592155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.592181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.592277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.592304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.592400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.592431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.592528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.592560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.592661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.592689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.592791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.592821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.592925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.592952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.593048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.593076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.593175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.593208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.593296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.593323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.593417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.593444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.593545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.593573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.593670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.593697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.593788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.593815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.593910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.593938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.594030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.594058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.594143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.594170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.594262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.594289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.594375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.594402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.594499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.594545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.594637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.594664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.594755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.594782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.594889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.594916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.595009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.595036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.595130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.595158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.595257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.595285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.451 qpair failed and we were unable to recover it.
00:41:05.451 [2024-07-11 02:46:55.595391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.451 [2024-07-11 02:46:55.595418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.452 qpair failed and we were unable to recover it.
00:41:05.452 [2024-07-11 02:46:55.595523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.452 [2024-07-11 02:46:55.595551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.452 qpair failed and we were unable to recover it.
00:41:05.452 [2024-07-11 02:46:55.595653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.452 [2024-07-11 02:46:55.595680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.452 qpair failed and we were unable to recover it.
00:41:05.452 [2024-07-11 02:46:55.595783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.452 [2024-07-11 02:46:55.595810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.452 qpair failed and we were unable to recover it.
00:41:05.452 [2024-07-11 02:46:55.595895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.452 [2024-07-11 02:46:55.595923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.452 qpair failed and we were unable to recover it.
00:41:05.452 [2024-07-11 02:46:55.596018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.452 [2024-07-11 02:46:55.596048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.452 qpair failed and we were unable to recover it.
00:41:05.452 [2024-07-11 02:46:55.596143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.452 [2024-07-11 02:46:55.596174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.452 qpair failed and we were unable to recover it.
00:41:05.452 [2024-07-11 02:46:55.596269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.452 [2024-07-11 02:46:55.596297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.452 qpair failed and we were unable to recover it.
00:41:05.452 [2024-07-11 02:46:55.596387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.452 [2024-07-11 02:46:55.596414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.452 qpair failed and we were unable to recover it.
00:41:05.452 [2024-07-11 02:46:55.596515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.452 [2024-07-11 02:46:55.596548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.452 qpair failed and we were unable to recover it.
00:41:05.452 [2024-07-11 02:46:55.596641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.452 [2024-07-11 02:46:55.596668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.452 qpair failed and we were unable to recover it.
00:41:05.452 [2024-07-11 02:46:55.596758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.452 [2024-07-11 02:46:55.596785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.452 qpair failed and we were unable to recover it.
00:41:05.452 [2024-07-11 02:46:55.596875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.452 [2024-07-11 02:46:55.596903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.452 qpair failed and we were unable to recover it.
00:41:05.452 [2024-07-11 02:46:55.597003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.452 [2024-07-11 02:46:55.597032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3334000b90 with addr=10.0.0.2, port=4420
00:41:05.452 qpair failed and we were unable to recover it.
00:41:05.452 [2024-07-11 02:46:55.597125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.452 [2024-07-11 02:46:55.597154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.452 qpair failed and we were unable to recover it.
00:41:05.452 [2024-07-11 02:46:55.597248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.452 [2024-07-11 02:46:55.597276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.452 qpair failed and we were unable to recover it.
00:41:05.452 [2024-07-11 02:46:55.597365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.452 [2024-07-11 02:46:55.597391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.452 qpair failed and we were unable to recover it.
00:41:05.452 [2024-07-11 02:46:55.597489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.452 [2024-07-11 02:46:55.597524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.452 qpair failed and we were unable to recover it.
00:41:05.452 [2024-07-11 02:46:55.597622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.452 [2024-07-11 02:46:55.597648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2266180 with addr=10.0.0.2, port=4420
00:41:05.452 qpair failed and we were unable to recover it.
00:41:05.452 [2024-07-11 02:46:55.597755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.452 [2024-07-11 02:46:55.597787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.452 qpair failed and we were unable to recover it.
00:41:05.452 [2024-07-11 02:46:55.597886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.452 [2024-07-11 02:46:55.597913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.452 qpair failed and we were unable to recover it.
00:41:05.452 [2024-07-11 02:46:55.598015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.452 [2024-07-11 02:46:55.598044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.452 qpair failed and we were unable to recover it.
00:41:05.452 [2024-07-11 02:46:55.598140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.452 [2024-07-11 02:46:55.598168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.452 qpair failed and we were unable to recover it.
00:41:05.452 [2024-07-11 02:46:55.598276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.452 [2024-07-11 02:46:55.598304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f332c000b90 with addr=10.0.0.2, port=4420
00:41:05.452 qpair failed and we were unable to recover it.
00:41:05.452 [2024-07-11 02:46:55.598406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.452 [2024-07-11 02:46:55.598437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f333c000b90 with addr=10.0.0.2, port=4420
00:41:05.452 qpair failed and we were unable to recover it.
00:41:05.452 [2024-07-11 02:46:55.598579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:05.452 [2024-07-11 02:46:55.598619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2274170 with addr=10.0.0.2, port=4420
00:41:05.452 [2024-07-11 02:46:55.598638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2274170 is same with the state(5) to be set
00:41:05.452 [2024-07-11 02:46:55.598666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2274170 (9): Bad file descriptor
00:41:05.452 [2024-07-11 02:46:55.598687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:41:05.452 [2024-07-11 02:46:55.598710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:41:05.452 [2024-07-11 02:46:55.598727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:41:05.452 Unable to reset the controller.
00:41:05.452 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:05.452 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:41:05.452 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:05.452 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:41:05.452 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:05.452 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:05.452 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:05.452 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:05.452 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:05.452 Malloc0 00:41:05.452 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:05.452 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:41:05.452 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:05.452 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:05.452 [2024-07-11 02:46:55.754342] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:05.452 02:46:55 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:05.452 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:05.452 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:05.452 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:05.453 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:05.453 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:05.453 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:05.453 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:05.453 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:05.453 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:05.453 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:05.453 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:05.453 [2024-07-11 02:46:55.782567] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:05.453 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:41:05.453 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:05.453 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:05.453 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:05.453 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:05.453 02:46:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1980965 00:41:06.385 Controller properly reset. 00:41:11.650 Initializing NVMe Controllers 00:41:11.650 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:11.650 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:11.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:41:11.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:41:11.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:41:11.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:41:11.650 Initialization complete. Launching workers. 
00:41:11.650 Starting thread on core 1 00:41:11.650 Starting thread on core 2 00:41:11.650 Starting thread on core 3 00:41:11.650 Starting thread on core 0 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:41:11.650 00:41:11.650 real 0m10.669s 00:41:11.650 user 0m33.524s 00:41:11.650 sys 0m8.039s 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:11.650 ************************************ 00:41:11.650 END TEST nvmf_target_disconnect_tc2 00:41:11.650 ************************************ 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:11.650 rmmod nvme_tcp 00:41:11.650 rmmod nvme_fabrics 00:41:11.650 rmmod nvme_keyring 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:11.650 
02:47:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1981357 ']' 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1981357 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1981357 ']' 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1981357 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1981357 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1981357' 00:41:11.650 killing process with pid 1981357 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1981357 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1981357 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:11.650 02:47:01 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:11.650 02:47:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:13.552 02:47:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:41:13.552 00:41:13.552 real 0m14.920s 00:41:13.552 user 0m58.195s 00:41:13.552 sys 0m10.269s 00:41:13.552 02:47:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:13.552 02:47:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:41:13.552 ************************************ 00:41:13.552 END TEST nvmf_target_disconnect 00:41:13.552 ************************************ 00:41:13.552 02:47:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:41:13.552 02:47:03 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:41:13.552 02:47:03 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:41:13.552 02:47:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:13.810 02:47:03 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:41:13.810 00:41:13.810 real 27m22.309s 00:41:13.810 user 75m32.689s 00:41:13.810 sys 6m0.696s 00:41:13.810 02:47:03 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:13.810 02:47:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:13.810 ************************************ 00:41:13.810 END TEST nvmf_tcp 00:41:13.810 ************************************ 00:41:13.810 02:47:04 -- common/autotest_common.sh@1142 -- # return 0 00:41:13.810 02:47:04 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:41:13.810 02:47:04 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:13.810 02:47:04 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:41:13.810 02:47:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:13.810 02:47:04 -- common/autotest_common.sh@10 -- # set +x 00:41:13.810 ************************************ 00:41:13.810 START TEST spdkcli_nvmf_tcp 00:41:13.810 ************************************ 00:41:13.810 02:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:13.810 * Looking for test storage... 00:41:13.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:41:13.810 02:47:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:41:13.810 02:47:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:41:13.810 02:47:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:41:13.810 02:47:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:13.810 02:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:41:13.810 02:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:13.810 02:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:13.810 02:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:13.810 02:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:13.810 02:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:13.810 02:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:13.810 02:47:04 spdkcli_nvmf_tcp 
-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:13.810 02:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:13.810 02:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:13.810 02:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:13.810 02:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1982258 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1982258 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1982258 ']' 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:13.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:13.811 02:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:13.811 [2024-07-11 02:47:04.157267] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:41:13.811 [2024-07-11 02:47:04.157360] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1982258 ] 00:41:13.811 EAL: No free 2048 kB hugepages reported on node 1 00:41:13.811 [2024-07-11 02:47:04.215736] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:14.069 [2024-07-11 02:47:04.304272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:41:14.069 [2024-07-11 02:47:04.304276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:14.069 02:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:14.069 02:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:41:14.069 02:47:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:41:14.069 02:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:41:14.069 02:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:14.069 02:47:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:41:14.069 02:47:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:41:14.069 02:47:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:41:14.069 02:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:41:14.069 02:47:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:14.069 02:47:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:41:14.069 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:41:14.069 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:41:14.069 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:41:14.069 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:41:14.069 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:41:14.069 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:41:14.069 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:14.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:41:14.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:41:14.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:14.069 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:14.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:41:14.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:14.069 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:14.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:41:14.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' 
True 00:41:14.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:14.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:14.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:14.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:41:14.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:41:14.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:14.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:41:14.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:14.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:41:14.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:41:14.069 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:41:14.069 ' 00:41:17.346 [2024-07-11 02:47:07.027078] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:17.911 [2024-07-11 02:47:08.267248] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:41:20.430 [2024-07-11 02:47:10.578613] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:41:22.329 [2024-07-11 02:47:12.556637] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:41:23.707 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', 
True] 00:41:23.707 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:41:23.707 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:41:23.707 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:41:23.707 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:41:23.707 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:41:23.707 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:41:23.707 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:41:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:41:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:23.707 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:41:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:23.707 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:41:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 
127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:23.707 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:23.708 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:41:23.708 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:41:23.708 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:23.708 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:41:23.708 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:23.708 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:41:23.708 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:41:23.708 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:41:23.965 02:47:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:41:23.965 02:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:41:23.965 02:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:23.965 02:47:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:41:23.965 02:47:14 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:41:23.965 02:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:23.965 02:47:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:41:23.965 02:47:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:41:24.223 02:47:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:41:24.481 02:47:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:41:24.481 02:47:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:41:24.481 02:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:41:24.481 02:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:24.481 02:47:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:41:24.481 02:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:41:24.481 02:47:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:24.481 02:47:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:41:24.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:41:24.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:24.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:41:24.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses 
delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:41:24.481 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:41:24.481 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:41:24.481 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:24.481 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:41:24.481 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:41:24.481 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:41:24.481 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:41:24.481 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:41:24.481 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:41:24.481 ' 00:41:29.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:41:29.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:41:29.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:29.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:41:29.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:41:29.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:41:29.746 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:41:29.746 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:29.746 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:41:29.746 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:41:29.746 
Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:41:29.746 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:41:29.746 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:41:29.746 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:41:29.746 02:47:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:41:29.746 02:47:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:41:29.746 02:47:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:29.746 02:47:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1982258 00:41:29.746 02:47:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1982258 ']' 00:41:29.746 02:47:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1982258 00:41:29.746 02:47:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:41:29.746 02:47:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:29.746 02:47:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1982258 00:41:29.746 02:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:29.746 02:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:29.746 02:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1982258' 00:41:29.746 killing process with pid 1982258 00:41:29.746 02:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1982258 00:41:29.746 02:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1982258 00:41:29.746 02:47:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:41:29.746 02:47:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:41:29.746 02:47:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1982258 ']' 00:41:29.746 02:47:20 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@14 -- # killprocess 1982258 00:41:29.746 02:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1982258 ']' 00:41:29.746 02:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1982258 00:41:29.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1982258) - No such process 00:41:29.746 02:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1982258 is not found' 00:41:29.746 Process with pid 1982258 is not found 00:41:29.747 02:47:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:41:29.747 02:47:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:41:29.747 02:47:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:41:29.747 00:41:29.747 real 0m16.131s 00:41:29.747 user 0m34.489s 00:41:29.747 sys 0m0.750s 00:41:29.747 02:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:29.747 02:47:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:29.747 ************************************ 00:41:29.747 END TEST spdkcli_nvmf_tcp 00:41:29.747 ************************************ 00:41:30.005 02:47:20 -- common/autotest_common.sh@1142 -- # return 0 00:41:30.005 02:47:20 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:30.005 02:47:20 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:41:30.005 02:47:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:30.005 02:47:20 -- common/autotest_common.sh@10 -- # set +x 00:41:30.005 ************************************ 00:41:30.005 START TEST nvmf_identify_passthru 00:41:30.005 
************************************ 00:41:30.005 02:47:20 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:30.005 * Looking for test storage... 00:41:30.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:30.005 02:47:20 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:30.006 02:47:20 nvmf_identify_passthru 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:30.006 02:47:20 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:30.006 02:47:20 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:30.006 02:47:20 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:30.006 02:47:20 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:30.006 02:47:20 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:30.006 02:47:20 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:30.006 02:47:20 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:30.006 02:47:20 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:30.006 02:47:20 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:30.006 02:47:20 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:30.006 02:47:20 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:30.006 02:47:20 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:30.006 02:47:20 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:30.006 02:47:20 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:30.006 02:47:20 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:30.006 02:47:20 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:41:30.006 02:47:20 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:30.006 02:47:20 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:30.006 02:47:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:30.006 02:47:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:41:30.006 02:47:20 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:41:30.006 02:47:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:31.910 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:31.910 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:41:31.910 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:41:31.910 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:41:31.910 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:41:31.910 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:41:31.910 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:41:31.910 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:41:31.910 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:41:31.910 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:41:31.910 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:41:31.910 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:41:31.910 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:31.911 02:47:21 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:41:31.911 Found 0000:08:00.0 (0x8086 - 0x159b) 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:41:31.911 Found 0000:08:00.1 (0x8086 - 0x159b) 00:41:31.911 02:47:21 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:41:31.911 Found net devices under 0000:08:00.0: cvl_0_0 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:31.911 02:47:21 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:41:31.911 Found net devices under 0000:08:00.1: cvl_0_1 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:31.911 02:47:21 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:41:31.911 02:47:21 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:31.911 02:47:22 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:31.911 02:47:22 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:31.911 02:47:22 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:41:31.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:31.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:41:31.911 00:41:31.911 --- 10.0.0.2 ping statistics --- 00:41:31.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:31.911 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:41:31.911 02:47:22 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:31.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:31.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:41:31.911 00:41:31.911 --- 10.0.0.1 ping statistics --- 00:41:31.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:31.911 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:41:31.911 02:47:22 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:31.911 02:47:22 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:41:31.911 02:47:22 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:41:31.911 02:47:22 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:31.911 02:47:22 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:31.911 02:47:22 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:31.911 02:47:22 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:31.911 02:47:22 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:31.911 02:47:22 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:31.911 02:47:22 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:41:31.911 02:47:22 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:41:31.911 02:47:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:31.911 02:47:22 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:41:31.911 02:47:22 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:41:31.911 02:47:22 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:41:31.911 02:47:22 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:41:31.911 02:47:22 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:41:31.911 02:47:22 nvmf_identify_passthru -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:41:31.911 02:47:22 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:41:31.911 02:47:22 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:41:31.911 02:47:22 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:41:31.911 02:47:22 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:41:31.911 02:47:22 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:41:31.911 02:47:22 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:84:00.0 00:41:31.911 02:47:22 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:84:00.0 00:41:31.911 02:47:22 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:84:00.0 00:41:31.911 02:47:22 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:84:00.0 ']' 00:41:31.911 02:47:22 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:84:00.0' -i 0 00:41:31.911 02:47:22 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:41:31.911 02:47:22 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:41:31.911 EAL: No free 2048 kB hugepages reported on node 1 00:41:36.095 02:47:26 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ8275016S1P0FGN 00:41:36.095 02:47:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:84:00.0' -i 0 00:41:36.095 02:47:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:41:36.095 02:47:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:41:36.095 EAL: No free 2048 kB hugepages reported on node 1 00:41:40.285 02:47:30 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:41:40.285 02:47:30 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:41:40.285 02:47:30 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:41:40.285 02:47:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:40.285 02:47:30 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:41:40.285 02:47:30 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:41:40.285 02:47:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:40.285 02:47:30 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1985768 00:41:40.285 02:47:30 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:41:40.285 02:47:30 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:40.285 02:47:30 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1985768 00:41:40.285 02:47:30 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1985768 ']' 00:41:40.285 02:47:30 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:40.285 02:47:30 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:40.285 02:47:30 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:41:40.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:40.285 02:47:30 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:40.285 02:47:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:40.285 [2024-07-11 02:47:30.570072] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:41:40.285 [2024-07-11 02:47:30.570178] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:40.285 EAL: No free 2048 kB hugepages reported on node 1 00:41:40.285 [2024-07-11 02:47:30.639017] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:40.543 [2024-07-11 02:47:30.729381] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:40.543 [2024-07-11 02:47:30.729442] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:40.543 [2024-07-11 02:47:30.729459] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:40.543 [2024-07-11 02:47:30.729473] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:40.543 [2024-07-11 02:47:30.729485] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:40.543 [2024-07-11 02:47:30.729557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:41:40.543 [2024-07-11 02:47:30.729609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:41:40.543 [2024-07-11 02:47:30.729687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:41:40.543 [2024-07-11 02:47:30.729720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:40.543 02:47:30 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:40.543 02:47:30 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:41:40.543 02:47:30 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:41:40.543 02:47:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:40.543 02:47:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:40.543 INFO: Log level set to 20 00:41:40.543 INFO: Requests: 00:41:40.543 { 00:41:40.543 "jsonrpc": "2.0", 00:41:40.543 "method": "nvmf_set_config", 00:41:40.543 "id": 1, 00:41:40.543 "params": { 00:41:40.543 "admin_cmd_passthru": { 00:41:40.543 "identify_ctrlr": true 00:41:40.543 } 00:41:40.543 } 00:41:40.543 } 00:41:40.543 00:41:40.543 INFO: response: 00:41:40.543 { 00:41:40.543 "jsonrpc": "2.0", 00:41:40.543 "id": 1, 00:41:40.543 "result": true 00:41:40.543 } 00:41:40.543 00:41:40.543 02:47:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:40.543 02:47:30 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:41:40.543 02:47:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:40.543 02:47:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:40.543 INFO: Setting log level to 20 00:41:40.543 INFO: Setting log level to 20 00:41:40.543 INFO: Log level set to 20 00:41:40.543 INFO: Log level set to 20 00:41:40.543 
INFO: Requests: 00:41:40.543 { 00:41:40.543 "jsonrpc": "2.0", 00:41:40.543 "method": "framework_start_init", 00:41:40.543 "id": 1 00:41:40.543 } 00:41:40.543 00:41:40.543 INFO: Requests: 00:41:40.543 { 00:41:40.543 "jsonrpc": "2.0", 00:41:40.543 "method": "framework_start_init", 00:41:40.543 "id": 1 00:41:40.543 } 00:41:40.543 00:41:40.543 [2024-07-11 02:47:30.933721] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:41:40.543 INFO: response: 00:41:40.543 { 00:41:40.543 "jsonrpc": "2.0", 00:41:40.543 "id": 1, 00:41:40.543 "result": true 00:41:40.543 } 00:41:40.543 00:41:40.543 INFO: response: 00:41:40.543 { 00:41:40.543 "jsonrpc": "2.0", 00:41:40.543 "id": 1, 00:41:40.543 "result": true 00:41:40.544 } 00:41:40.544 00:41:40.544 02:47:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:40.544 02:47:30 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:40.544 02:47:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:40.544 02:47:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:40.544 INFO: Setting log level to 40 00:41:40.544 INFO: Setting log level to 40 00:41:40.544 INFO: Setting log level to 40 00:41:40.544 [2024-07-11 02:47:30.943767] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:40.544 02:47:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:40.544 02:47:30 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:41:40.544 02:47:30 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:41:40.544 02:47:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:40.802 02:47:30 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:84:00.0 00:41:40.802 02:47:30 
nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:40.802 02:47:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:44.095 Nvme0n1 00:41:44.095 02:47:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:44.095 02:47:33 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:41:44.095 02:47:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:44.095 02:47:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:44.095 02:47:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:44.095 02:47:33 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:41:44.095 02:47:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:44.095 02:47:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:44.095 02:47:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:44.095 02:47:33 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:44.095 02:47:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:44.095 02:47:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:44.095 [2024-07-11 02:47:33.821349] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:44.095 02:47:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:44.095 02:47:33 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:41:44.095 02:47:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:44.095 02:47:33 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:44.095 [ 00:41:44.095 { 00:41:44.095 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:41:44.095 "subtype": "Discovery", 00:41:44.095 "listen_addresses": [], 00:41:44.095 "allow_any_host": true, 00:41:44.095 "hosts": [] 00:41:44.095 }, 00:41:44.095 { 00:41:44.095 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:41:44.095 "subtype": "NVMe", 00:41:44.095 "listen_addresses": [ 00:41:44.095 { 00:41:44.095 "trtype": "TCP", 00:41:44.095 "adrfam": "IPv4", 00:41:44.095 "traddr": "10.0.0.2", 00:41:44.095 "trsvcid": "4420" 00:41:44.095 } 00:41:44.095 ], 00:41:44.095 "allow_any_host": true, 00:41:44.095 "hosts": [], 00:41:44.095 "serial_number": "SPDK00000000000001", 00:41:44.095 "model_number": "SPDK bdev Controller", 00:41:44.095 "max_namespaces": 1, 00:41:44.095 "min_cntlid": 1, 00:41:44.095 "max_cntlid": 65519, 00:41:44.095 "namespaces": [ 00:41:44.095 { 00:41:44.095 "nsid": 1, 00:41:44.095 "bdev_name": "Nvme0n1", 00:41:44.095 "name": "Nvme0n1", 00:41:44.095 "nguid": "FE6D5325439F4357B3A6AEFAFFBBFF22", 00:41:44.095 "uuid": "fe6d5325-439f-4357-b3a6-aefaffbbff22" 00:41:44.095 } 00:41:44.095 ] 00:41:44.095 } 00:41:44.095 ] 00:41:44.095 02:47:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:44.095 02:47:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:44.095 02:47:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:41:44.095 02:47:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:41:44.095 EAL: No free 2048 kB hugepages reported on node 1 00:41:44.095 02:47:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ8275016S1P0FGN 00:41:44.095 02:47:33 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:44.095 02:47:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:41:44.095 02:47:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:41:44.095 EAL: No free 2048 kB hugepages reported on node 1 00:41:44.095 02:47:34 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:41:44.095 02:47:34 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ8275016S1P0FGN '!=' PHLJ8275016S1P0FGN ']' 00:41:44.095 02:47:34 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:41:44.095 02:47:34 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:44.095 02:47:34 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:44.095 02:47:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:44.095 02:47:34 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:44.095 02:47:34 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:41:44.095 02:47:34 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:41:44.095 02:47:34 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:44.095 02:47:34 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:41:44.095 02:47:34 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:44.095 02:47:34 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:41:44.095 02:47:34 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:44.095 02:47:34 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:44.095 rmmod 
nvme_tcp 00:41:44.095 rmmod nvme_fabrics 00:41:44.095 rmmod nvme_keyring 00:41:44.095 02:47:34 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:44.095 02:47:34 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:41:44.095 02:47:34 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:41:44.095 02:47:34 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1985768 ']' 00:41:44.095 02:47:34 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1985768 00:41:44.095 02:47:34 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1985768 ']' 00:41:44.095 02:47:34 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1985768 00:41:44.095 02:47:34 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:41:44.095 02:47:34 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:44.095 02:47:34 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1985768 00:41:44.096 02:47:34 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:44.096 02:47:34 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:44.096 02:47:34 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1985768' 00:41:44.096 killing process with pid 1985768 00:41:44.096 02:47:34 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1985768 00:41:44.096 02:47:34 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1985768 00:41:45.469 02:47:35 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:41:45.469 02:47:35 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:45.469 02:47:35 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:45.469 02:47:35 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:41:45.469 02:47:35 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:45.469 02:47:35 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:45.469 02:47:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:45.469 02:47:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:48.029 02:47:37 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:41:48.029 00:41:48.029 real 0m17.658s 00:41:48.029 user 0m26.802s 00:41:48.029 sys 0m2.055s 00:41:48.029 02:47:37 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:48.029 02:47:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:48.029 ************************************ 00:41:48.029 END TEST nvmf_identify_passthru 00:41:48.029 ************************************ 00:41:48.029 02:47:37 -- common/autotest_common.sh@1142 -- # return 0 00:41:48.029 02:47:37 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:48.029 02:47:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:48.029 02:47:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:48.029 02:47:37 -- common/autotest_common.sh@10 -- # set +x 00:41:48.029 ************************************ 00:41:48.029 START TEST nvmf_dif 00:41:48.029 ************************************ 00:41:48.029 02:47:37 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:48.029 * Looking for test storage... 
00:41:48.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:48.029 02:47:37 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:48.029 02:47:37 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:48.029 02:47:37 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:48.029 02:47:37 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:48.029 02:47:37 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:48.029 02:47:37 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:48.029 02:47:37 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:48.029 02:47:37 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:41:48.029 02:47:37 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:48.029 02:47:37 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:41:48.029 02:47:37 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:41:48.029 02:47:37 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:41:48.029 02:47:37 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:41:48.029 02:47:37 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:48.029 02:47:37 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:48.029 02:47:37 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:41:48.029 02:47:37 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:41:48.029 02:47:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:49.461 02:47:39 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:49.461 02:47:39 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:41:49.461 02:47:39 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:41:49.461 02:47:39 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:41:49.461 02:47:39 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:41:49.461 02:47:39 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:41:49.461 02:47:39 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:41:49.461 02:47:39 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:41:49.462 Found 0000:08:00.0 (0x8086 - 0x159b) 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 
(0x8086 - 0x159b)' 00:41:49.462 Found 0000:08:00.1 (0x8086 - 0x159b) 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:41:49.462 Found net devices under 0000:08:00.0: cvl_0_0 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:41:49.462 Found net devices under 0000:08:00.1: cvl_0_1 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:49.462 02:47:39 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:41:49.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:49.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:41:49.462 00:41:49.462 --- 10.0.0.2 ping statistics --- 00:41:49.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:49.462 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:49.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:49.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:41:49.462 00:41:49.462 --- 10.0.0.1 ping statistics --- 00:41:49.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:49.462 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:41:49.462 02:47:39 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:50.403 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:41:50.403 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:41:50.403 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:41:50.403 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:41:50.403 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:41:50.403 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:41:50.403 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:41:50.403 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:41:50.403 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:41:50.403 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:41:50.403 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:41:50.403 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:41:50.403 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:41:50.404 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:41:50.404 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:41:50.404 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:41:50.404 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:41:50.404 02:47:40 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:50.404 02:47:40 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:50.404 02:47:40 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:50.404 02:47:40 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:50.404 02:47:40 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:50.404 02:47:40 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:50.404 02:47:40 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:41:50.404 02:47:40 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:41:50.404 02:47:40 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:50.404 02:47:40 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:41:50.404 02:47:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:50.404 02:47:40 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1988271 00:41:50.404 02:47:40 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:41:50.404 02:47:40 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1988271 00:41:50.404 02:47:40 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1988271 ']' 00:41:50.404 02:47:40 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:50.404 02:47:40 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:50.404 02:47:40 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:50.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:50.404 02:47:40 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:50.404 02:47:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:50.404 [2024-07-11 02:47:40.719470] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:41:50.404 [2024-07-11 02:47:40.719581] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:50.404 EAL: No free 2048 kB hugepages reported on node 1 00:41:50.404 [2024-07-11 02:47:40.785213] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:50.663 [2024-07-11 02:47:40.873006] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:50.663 [2024-07-11 02:47:40.873064] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:50.663 [2024-07-11 02:47:40.873080] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:50.663 [2024-07-11 02:47:40.873093] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:50.663 [2024-07-11 02:47:40.873106] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:50.663 [2024-07-11 02:47:40.873135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:50.663 02:47:40 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:50.663 02:47:40 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:41:50.663 02:47:40 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:50.663 02:47:40 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:41:50.663 02:47:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:50.663 02:47:40 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:50.663 02:47:40 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:41:50.663 02:47:40 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:41:50.663 02:47:40 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:50.663 02:47:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:50.663 [2024-07-11 02:47:40.995642] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:50.663 02:47:40 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:50.663 02:47:40 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:41:50.663 02:47:40 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:50.663 02:47:40 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:50.663 02:47:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:50.663 ************************************ 00:41:50.663 START TEST fio_dif_1_default 00:41:50.663 ************************************ 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:50.663 bdev_null0 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:50.663 [2024-07-11 02:47:41.051898] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:50.663 { 00:41:50.663 "params": { 00:41:50.663 "name": "Nvme$subsystem", 00:41:50.663 "trtype": "$TEST_TRANSPORT", 00:41:50.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:50.663 "adrfam": "ipv4", 00:41:50.663 "trsvcid": "$NVMF_PORT", 00:41:50.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:50.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:50.663 "hdgst": ${hdgst:-false}, 00:41:50.663 "ddgst": ${ddgst:-false} 00:41:50.663 }, 00:41:50.663 "method": "bdev_nvme_attach_controller" 00:41:50.663 } 00:41:50.663 EOF 00:41:50.663 )") 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:41:50.663 02:47:41 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:41:50.663 "params": { 00:41:50.663 "name": "Nvme0", 00:41:50.663 "trtype": "tcp", 00:41:50.663 "traddr": "10.0.0.2", 00:41:50.663 "adrfam": "ipv4", 00:41:50.663 "trsvcid": "4420", 00:41:50.663 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:50.663 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:50.663 "hdgst": false, 00:41:50.663 "ddgst": false 00:41:50.663 }, 00:41:50.663 "method": "bdev_nvme_attach_controller" 00:41:50.663 }' 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:50.663 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:50.922 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:50.922 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:50.922 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:50.922 02:47:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:50.922 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:50.922 fio-3.35 
00:41:50.922 Starting 1 thread 00:41:50.922 EAL: No free 2048 kB hugepages reported on node 1 00:42:03.115 00:42:03.115 filename0: (groupid=0, jobs=1): err= 0: pid=1988384: Thu Jul 11 02:47:51 2024 00:42:03.115 read: IOPS=190, BW=762KiB/s (780kB/s)(7632KiB/10016msec) 00:42:03.115 slat (nsec): min=7034, max=61632, avg=9195.45, stdev=3531.35 00:42:03.115 clat (usec): min=592, max=42338, avg=20968.05, stdev=20324.56 00:42:03.115 lat (usec): min=600, max=42348, avg=20977.24, stdev=20324.20 00:42:03.115 clat percentiles (usec): 00:42:03.115 | 1.00th=[ 619], 5.00th=[ 635], 10.00th=[ 644], 20.00th=[ 660], 00:42:03.115 | 30.00th=[ 676], 40.00th=[ 701], 50.00th=[ 2638], 60.00th=[41157], 00:42:03.115 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:03.115 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:42:03.115 | 99.99th=[42206] 00:42:03.115 bw ( KiB/s): min= 704, max= 768, per=99.87%, avg=761.60, stdev=16.74, samples=20 00:42:03.115 iops : min= 176, max= 192, avg=190.40, stdev= 4.19, samples=20 00:42:03.115 lat (usec) : 750=42.92%, 1000=6.97% 00:42:03.115 lat (msec) : 4=0.21%, 50=49.90% 00:42:03.115 cpu : usr=90.52%, sys=9.10%, ctx=14, majf=0, minf=259 00:42:03.115 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:03.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.115 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:03.115 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:03.115 00:42:03.115 Run status group 0 (all jobs): 00:42:03.115 READ: bw=762KiB/s (780kB/s), 762KiB/s-762KiB/s (780kB/s-780kB/s), io=7632KiB (7815kB), run=10016-10016msec 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:42:03.115 02:47:52 
nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:03.115 00:42:03.115 real 0m11.012s 00:42:03.115 user 0m9.888s 00:42:03.115 sys 0m1.157s 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:03.115 ************************************ 00:42:03.115 END TEST fio_dif_1_default 00:42:03.115 ************************************ 00:42:03.115 02:47:52 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:42:03.115 02:47:52 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:42:03.115 02:47:52 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:03.115 02:47:52 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:03.115 02:47:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:03.115 
************************************ 00:42:03.115 START TEST fio_dif_1_multi_subsystems 00:42:03.115 ************************************ 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:03.115 bdev_null0 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bdev_null0 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:03.115 [2024-07-11 02:47:52.116911] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:03.115 bdev_null1 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:03.115 02:47:52 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:42:03.115 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:42:03.116 { 00:42:03.116 "params": { 00:42:03.116 "name": "Nvme$subsystem", 00:42:03.116 "trtype": "$TEST_TRANSPORT", 00:42:03.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:03.116 "adrfam": "ipv4", 00:42:03.116 "trsvcid": "$NVMF_PORT", 00:42:03.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:03.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:03.116 "hdgst": ${hdgst:-false}, 
00:42:03.116 "ddgst": ${ddgst:-false} 00:42:03.116 }, 00:42:03.116 "method": "bdev_nvme_attach_controller" 00:42:03.116 } 00:42:03.116 EOF 00:42:03.116 )") 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:42:03.116 { 00:42:03.116 "params": { 00:42:03.116 "name": "Nvme$subsystem", 00:42:03.116 "trtype": "$TEST_TRANSPORT", 00:42:03.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:03.116 "adrfam": "ipv4", 00:42:03.116 "trsvcid": "$NVMF_PORT", 00:42:03.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:03.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:03.116 "hdgst": ${hdgst:-false}, 00:42:03.116 "ddgst": ${ddgst:-false} 00:42:03.116 }, 00:42:03.116 "method": "bdev_nvme_attach_controller" 00:42:03.116 } 00:42:03.116 EOF 00:42:03.116 )") 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 
00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:42:03.116 "params": { 00:42:03.116 "name": "Nvme0", 00:42:03.116 "trtype": "tcp", 00:42:03.116 "traddr": "10.0.0.2", 00:42:03.116 "adrfam": "ipv4", 00:42:03.116 "trsvcid": "4420", 00:42:03.116 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:03.116 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:03.116 "hdgst": false, 00:42:03.116 "ddgst": false 00:42:03.116 }, 00:42:03.116 "method": "bdev_nvme_attach_controller" 00:42:03.116 },{ 00:42:03.116 "params": { 00:42:03.116 "name": "Nvme1", 00:42:03.116 "trtype": "tcp", 00:42:03.116 "traddr": "10.0.0.2", 00:42:03.116 "adrfam": "ipv4", 00:42:03.116 "trsvcid": "4420", 00:42:03.116 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:03.116 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:03.116 "hdgst": false, 00:42:03.116 "ddgst": false 00:42:03.116 }, 00:42:03.116 "method": "bdev_nvme_attach_controller" 00:42:03.116 }' 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # 
asan_lib= 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:03.116 02:47:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:03.116 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:03.116 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:03.116 fio-3.35 00:42:03.116 Starting 2 threads 00:42:03.116 EAL: No free 2048 kB hugepages reported on node 1 00:42:13.089 00:42:13.089 filename0: (groupid=0, jobs=1): err= 0: pid=1989530: Thu Jul 11 02:48:03 2024 00:42:13.089 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10014msec) 00:42:13.089 slat (nsec): min=7409, max=57342, avg=9335.67, stdev=3109.39 00:42:13.089 clat (usec): min=40810, max=42779, avg=41011.77, stdev=212.85 00:42:13.089 lat (usec): min=40818, max=42811, avg=41021.10, stdev=213.05 00:42:13.089 clat percentiles (usec): 00:42:13.089 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:42:13.089 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:13.089 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:13.089 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:42:13.089 | 99.99th=[42730] 00:42:13.089 bw ( KiB/s): min= 384, max= 416, per=40.14%, avg=388.80, stdev=11.72, samples=20 00:42:13.089 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:42:13.089 lat (msec) : 50=100.00% 00:42:13.089 cpu : usr=94.19%, sys=5.46%, ctx=17, majf=0, minf=159 00:42:13.089 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:42:13.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:13.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:13.089 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:13.089 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:13.089 filename1: (groupid=0, jobs=1): err= 0: pid=1989531: Thu Jul 11 02:48:03 2024 00:42:13.089 read: IOPS=144, BW=577KiB/s (591kB/s)(5776KiB/10013msec) 00:42:13.089 slat (nsec): min=7526, max=32347, avg=9249.16, stdev=2334.97 00:42:13.089 clat (usec): min=592, max=43752, avg=27708.50, stdev=19084.75 00:42:13.089 lat (usec): min=600, max=43785, avg=27717.75, stdev=19084.65 00:42:13.089 clat percentiles (usec): 00:42:13.089 | 1.00th=[ 619], 5.00th=[ 644], 10.00th=[ 652], 20.00th=[ 668], 00:42:13.089 | 30.00th=[ 701], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:13.089 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:13.089 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:42:13.089 | 99.99th=[43779] 00:42:13.089 bw ( KiB/s): min= 384, max= 768, per=59.48%, avg=576.00, stdev=183.97, samples=20 00:42:13.089 iops : min= 96, max= 192, avg=144.00, stdev=45.99, samples=20 00:42:13.089 lat (usec) : 750=32.20%, 1000=0.48% 00:42:13.089 lat (msec) : 2=0.55%, 50=66.76% 00:42:13.089 cpu : usr=93.65%, sys=5.99%, ctx=14, majf=0, minf=131 00:42:13.090 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:13.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:13.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:13.090 issued rwts: total=1444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:13.090 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:13.090 00:42:13.090 Run status group 0 (all jobs): 00:42:13.090 READ: bw=967KiB/s (990kB/s), 390KiB/s-577KiB/s (399kB/s-591kB/s), io=9680KiB (9912kB), 
run=10013-10014msec 00:42:13.090 02:48:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:42:13.090 02:48:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:42:13.090 02:48:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:13.090 02:48:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:13.090 02:48:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:42:13.090 02:48:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:13.090 02:48:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:13.090 02:48:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:13.090 02:48:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:13.090 02:48:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:13.090 02:48:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:13.090 02:48:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:13.090 02:48:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:13.090 02:48:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:13.090 02:48:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:13.090 02:48:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:42:13.090 02:48:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:13.090 02:48:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:13.090 02:48:03 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:13.090 02:48:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:13.090 02:48:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:13.090 02:48:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:13.090 02:48:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:13.090 02:48:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:13.090 00:42:13.090 real 0m11.273s 00:42:13.090 user 0m19.900s 00:42:13.090 sys 0m1.426s 00:42:13.090 02:48:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:13.090 02:48:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:13.090 ************************************ 00:42:13.090 END TEST fio_dif_1_multi_subsystems 00:42:13.090 ************************************ 00:42:13.090 02:48:03 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:42:13.090 02:48:03 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:42:13.090 02:48:03 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:13.090 02:48:03 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:13.090 02:48:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:13.090 ************************************ 00:42:13.090 START TEST fio_dif_rand_params 00:42:13.090 ************************************ 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:42:13.090 
02:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:13.090 bdev_null0 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:13.090 02:48:03 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:13.090 [2024-07-11 02:48:03.442561] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:42:13.090 { 00:42:13.090 "params": { 00:42:13.090 "name": "Nvme$subsystem", 00:42:13.090 "trtype": "$TEST_TRANSPORT", 00:42:13.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:13.090 "adrfam": "ipv4", 00:42:13.090 "trsvcid": "$NVMF_PORT", 00:42:13.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:13.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:13.090 "hdgst": ${hdgst:-false}, 00:42:13.090 "ddgst": ${ddgst:-false} 00:42:13.090 }, 00:42:13.090 "method": "bdev_nvme_attach_controller" 00:42:13.090 } 00:42:13.090 EOF 00:42:13.090 )") 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:42:13.090 02:48:03 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:42:13.090 "params": { 00:42:13.090 "name": "Nvme0", 00:42:13.090 "trtype": "tcp", 00:42:13.090 "traddr": "10.0.0.2", 00:42:13.090 "adrfam": "ipv4", 00:42:13.090 "trsvcid": "4420", 00:42:13.090 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:13.090 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:13.090 "hdgst": false, 00:42:13.090 "ddgst": false 00:42:13.090 }, 00:42:13.090 "method": "bdev_nvme_attach_controller" 00:42:13.090 }' 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:13.090 02:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:13.359 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:13.359 ... 00:42:13.359 fio-3.35 00:42:13.359 Starting 3 threads 00:42:13.359 EAL: No free 2048 kB hugepages reported on node 1 00:42:19.922 00:42:19.922 filename0: (groupid=0, jobs=1): err= 0: pid=1990701: Thu Jul 11 02:48:09 2024 00:42:19.922 read: IOPS=200, BW=25.1MiB/s (26.3MB/s)(126MiB/5044msec) 00:42:19.922 slat (usec): min=7, max=128, avg=16.97, stdev= 6.69 00:42:19.922 clat (usec): min=4916, max=90492, avg=14902.52, stdev=11176.87 00:42:19.922 lat (usec): min=4928, max=90503, avg=14919.49, stdev=11176.75 00:42:19.922 clat percentiles (usec): 00:42:19.922 | 1.00th=[ 5407], 5.00th=[ 5866], 10.00th=[ 7177], 20.00th=[ 9503], 00:42:19.922 | 30.00th=[10290], 40.00th=[12387], 50.00th=[13173], 60.00th=[13698], 00:42:19.922 | 70.00th=[14091], 80.00th=[14615], 90.00th=[16057], 95.00th=[50594], 00:42:19.922 | 99.00th=[54264], 99.50th=[55837], 99.90th=[88605], 99.95th=[90702], 00:42:19.922 | 99.99th=[90702] 00:42:19.922 bw ( KiB/s): min=15616, max=33792, per=35.03%, avg=25830.40, stdev=5493.84, samples=10 00:42:19.922 iops : min= 122, max= 264, avg=201.80, stdev=42.92, samples=10 00:42:19.922 lat (msec) : 10=27.70%, 20=64.89%, 50=1.88%, 100=5.54% 00:42:19.922 cpu : usr=93.65%, sys=5.65%, ctx=119, majf=0, minf=133 00:42:19.922 IO depths : 1=2.3%, 2=97.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:19.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.922 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.922 issued rwts: total=1011,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:19.922 latency : target=0, window=0, 
percentile=100.00%, depth=3 00:42:19.922 filename0: (groupid=0, jobs=1): err= 0: pid=1990702: Thu Jul 11 02:48:09 2024 00:42:19.922 read: IOPS=189, BW=23.7MiB/s (24.8MB/s)(119MiB/5023msec) 00:42:19.922 slat (nsec): min=7842, max=61101, avg=13859.24, stdev=3950.73 00:42:19.922 clat (usec): min=5275, max=59112, avg=15821.98, stdev=11421.46 00:42:19.922 lat (usec): min=5286, max=59122, avg=15835.84, stdev=11421.40 00:42:19.922 clat percentiles (usec): 00:42:19.922 | 1.00th=[ 5866], 5.00th=[ 8291], 10.00th=[ 9110], 20.00th=[ 9896], 00:42:19.922 | 30.00th=[10945], 40.00th=[12911], 50.00th=[13566], 60.00th=[14091], 00:42:19.922 | 70.00th=[14484], 80.00th=[15139], 90.00th=[16909], 95.00th=[52691], 00:42:19.922 | 99.00th=[55837], 99.50th=[57410], 99.90th=[58983], 99.95th=[58983], 00:42:19.922 | 99.99th=[58983] 00:42:19.922 bw ( KiB/s): min=16929, max=30208, per=32.91%, avg=24272.10, stdev=4295.74, samples=10 00:42:19.922 iops : min= 132, max= 236, avg=189.60, stdev=33.61, samples=10 00:42:19.922 lat (msec) : 10=21.77%, 20=69.93%, 50=1.47%, 100=6.83% 00:42:19.922 cpu : usr=94.09%, sys=5.50%, ctx=12, majf=0, minf=115 00:42:19.922 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:19.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.922 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.922 issued rwts: total=951,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:19.922 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:19.922 filename0: (groupid=0, jobs=1): err= 0: pid=1990703: Thu Jul 11 02:48:09 2024 00:42:19.922 read: IOPS=188, BW=23.6MiB/s (24.7MB/s)(118MiB/5004msec) 00:42:19.922 slat (nsec): min=7775, max=34029, avg=14108.95, stdev=3822.15 00:42:19.922 clat (usec): min=3849, max=56946, avg=15881.50, stdev=11161.40 00:42:19.922 lat (usec): min=3863, max=56958, avg=15895.61, stdev=11161.21 00:42:19.922 clat percentiles (usec): 00:42:19.922 | 1.00th=[ 5473], 5.00th=[ 6521], 
10.00th=[ 9110], 20.00th=[10028], 00:42:19.922 | 30.00th=[11207], 40.00th=[13042], 50.00th=[13960], 60.00th=[14484], 00:42:19.922 | 70.00th=[14877], 80.00th=[15401], 90.00th=[17171], 95.00th=[50594], 00:42:19.922 | 99.00th=[54789], 99.50th=[55837], 99.90th=[56886], 99.95th=[56886], 00:42:19.922 | 99.99th=[56886] 00:42:19.922 bw ( KiB/s): min=18688, max=29184, per=32.70%, avg=24115.20, stdev=3536.55, samples=10 00:42:19.922 iops : min= 146, max= 228, avg=188.40, stdev=27.63, samples=10 00:42:19.922 lat (msec) : 4=0.11%, 10=19.92%, 20=71.72%, 50=2.97%, 100=5.30% 00:42:19.922 cpu : usr=93.96%, sys=5.58%, ctx=24, majf=0, minf=72 00:42:19.922 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:19.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.922 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.922 issued rwts: total=944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:19.922 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:19.922 00:42:19.922 Run status group 0 (all jobs): 00:42:19.922 READ: bw=72.0MiB/s (75.5MB/s), 23.6MiB/s-25.1MiB/s (24.7MB/s-26.3MB/s), io=363MiB (381MB), run=5004-5044msec 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- 
# set +x 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:19.922 bdev_null0 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:19.922 02:48:09 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:19.922 [2024-07-11 02:48:09.451801] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:19.922 bdev_null1 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:19.922 bdev_null2 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 
2 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:42:19.922 { 00:42:19.922 "params": { 00:42:19.922 "name": "Nvme$subsystem", 00:42:19.922 "trtype": "$TEST_TRANSPORT", 00:42:19.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:19.922 "adrfam": "ipv4", 00:42:19.922 "trsvcid": "$NVMF_PORT", 00:42:19.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:19.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:19.922 "hdgst": ${hdgst:-false}, 00:42:19.922 "ddgst": ${ddgst:-false} 00:42:19.922 }, 00:42:19.922 "method": "bdev_nvme_attach_controller" 00:42:19.922 } 00:42:19.922 EOF 00:42:19.922 )") 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:42:19.922 02:48:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:42:19.922 { 00:42:19.922 "params": { 00:42:19.922 "name": "Nvme$subsystem", 00:42:19.922 "trtype": "$TEST_TRANSPORT", 00:42:19.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:19.922 "adrfam": "ipv4", 00:42:19.922 "trsvcid": "$NVMF_PORT", 00:42:19.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:19.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:19.923 "hdgst": ${hdgst:-false}, 00:42:19.923 "ddgst": ${ddgst:-false} 00:42:19.923 }, 00:42:19.923 "method": "bdev_nvme_attach_controller" 
00:42:19.923 } 00:42:19.923 EOF 00:42:19.923 )") 00:42:19.923 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:19.923 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:19.923 02:48:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:42:19.923 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:19.923 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:19.923 02:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:19.923 02:48:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:42:19.923 02:48:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:42:19.923 { 00:42:19.923 "params": { 00:42:19.923 "name": "Nvme$subsystem", 00:42:19.923 "trtype": "$TEST_TRANSPORT", 00:42:19.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:19.923 "adrfam": "ipv4", 00:42:19.923 "trsvcid": "$NVMF_PORT", 00:42:19.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:19.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:19.923 "hdgst": ${hdgst:-false}, 00:42:19.923 "ddgst": ${ddgst:-false} 00:42:19.923 }, 00:42:19.923 "method": "bdev_nvme_attach_controller" 00:42:19.923 } 00:42:19.923 EOF 00:42:19.923 )") 00:42:19.923 02:48:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:42:19.923 02:48:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:42:19.923 02:48:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:42:19.923 02:48:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:42:19.923 "params": { 00:42:19.923 "name": "Nvme0", 00:42:19.923 "trtype": "tcp", 00:42:19.923 "traddr": "10.0.0.2", 00:42:19.923 "adrfam": "ipv4", 00:42:19.923 "trsvcid": "4420", 00:42:19.923 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:19.923 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:19.923 "hdgst": false, 00:42:19.923 "ddgst": false 00:42:19.923 }, 00:42:19.923 "method": "bdev_nvme_attach_controller" 00:42:19.923 },{ 00:42:19.923 "params": { 00:42:19.923 "name": "Nvme1", 00:42:19.923 "trtype": "tcp", 00:42:19.923 "traddr": "10.0.0.2", 00:42:19.923 "adrfam": "ipv4", 00:42:19.923 "trsvcid": "4420", 00:42:19.923 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:19.923 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:19.923 "hdgst": false, 00:42:19.923 "ddgst": false 00:42:19.923 }, 00:42:19.923 "method": "bdev_nvme_attach_controller" 00:42:19.923 },{ 00:42:19.923 "params": { 00:42:19.923 "name": "Nvme2", 00:42:19.923 "trtype": "tcp", 00:42:19.923 "traddr": "10.0.0.2", 00:42:19.923 "adrfam": "ipv4", 00:42:19.923 "trsvcid": "4420", 00:42:19.923 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:42:19.923 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:42:19.923 "hdgst": false, 00:42:19.923 "ddgst": false 00:42:19.923 }, 00:42:19.923 "method": "bdev_nvme_attach_controller" 00:42:19.923 }' 00:42:19.923 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:42:19.923 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:42:19.923 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:19.923 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:19.923 02:48:09 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:42:19.923 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:19.923 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:42:19.923 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:42:19.923 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:19.923 02:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:19.923 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:19.923 ... 00:42:19.923 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:19.923 ... 00:42:19.923 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:19.923 ... 
00:42:19.923 fio-3.35 00:42:19.923 Starting 24 threads 00:42:19.923 EAL: No free 2048 kB hugepages reported on node 1 00:42:32.127 00:42:32.127 filename0: (groupid=0, jobs=1): err= 0: pid=1991873: Thu Jul 11 02:48:20 2024 00:42:32.127 read: IOPS=151, BW=606KiB/s (621kB/s)(6080KiB/10030msec) 00:42:32.127 slat (usec): min=4, max=607, avg=90.13, stdev=27.48 00:42:32.127 clat (msec): min=6, max=473, avg=104.82, stdev=125.57 00:42:32.127 lat (msec): min=6, max=473, avg=104.91, stdev=125.57 00:42:32.127 clat percentiles (msec): 00:42:32.127 | 1.00th=[ 8], 5.00th=[ 30], 10.00th=[ 39], 20.00th=[ 39], 00:42:32.127 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 41], 00:42:32.127 | 70.00th=[ 42], 80.00th=[ 243], 90.00th=[ 351], 95.00th=[ 363], 00:42:32.127 | 99.00th=[ 372], 99.50th=[ 451], 99.90th=[ 472], 99.95th=[ 472], 00:42:32.127 | 99.99th=[ 472] 00:42:32.127 bw ( KiB/s): min= 128, max= 1664, per=4.19%, avg=601.45, stdev=615.89, samples=20 00:42:32.127 iops : min= 32, max= 416, avg=150.35, stdev=153.98, samples=20 00:42:32.127 lat (msec) : 10=1.05%, 20=2.11%, 50=72.50%, 100=2.24%, 250=2.24% 00:42:32.127 lat (msec) : 500=19.87% 00:42:32.127 cpu : usr=98.66%, sys=0.93%, ctx=16, majf=0, minf=21 00:42:32.127 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:42:32.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.127 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.127 issued rwts: total=1520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.127 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:32.127 filename0: (groupid=0, jobs=1): err= 0: pid=1991874: Thu Jul 11 02:48:20 2024 00:42:32.127 read: IOPS=145, BW=582KiB/s (596kB/s)(5824KiB/10012msec) 00:42:32.127 slat (usec): min=10, max=134, avg=42.05, stdev=23.70 00:42:32.127 clat (msec): min=38, max=406, avg=109.64, stdev=126.38 00:42:32.127 lat (msec): min=38, max=406, avg=109.68, stdev=126.38 00:42:32.127 clat 
percentiles (msec): 00:42:32.127 | 1.00th=[ 39], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 40], 00:42:32.127 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 41], 60.00th=[ 41], 00:42:32.127 | 70.00th=[ 44], 80.00th=[ 279], 90.00th=[ 355], 95.00th=[ 363], 00:42:32.127 | 99.00th=[ 405], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:42:32.127 | 99.99th=[ 405] 00:42:32.127 bw ( KiB/s): min= 127, max= 1664, per=3.66%, avg=525.16, stdev=564.33, samples=19 00:42:32.127 iops : min= 31, max= 416, avg=131.21, stdev=141.11, samples=19 00:42:32.127 lat (msec) : 50=73.63%, 100=3.30%, 250=2.20%, 500=20.88% 00:42:32.127 cpu : usr=97.13%, sys=1.88%, ctx=124, majf=0, minf=27 00:42:32.127 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:32.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.127 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.127 issued rwts: total=1456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.127 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:32.127 filename0: (groupid=0, jobs=1): err= 0: pid=1991875: Thu Jul 11 02:48:20 2024 00:42:32.127 read: IOPS=145, BW=582KiB/s (596kB/s)(5824KiB/10006msec) 00:42:32.127 slat (usec): min=11, max=154, avg=99.70, stdev=16.12 00:42:32.127 clat (msec): min=17, max=536, avg=109.10, stdev=131.81 00:42:32.127 lat (msec): min=17, max=536, avg=109.20, stdev=131.81 00:42:32.127 clat percentiles (msec): 00:42:32.127 | 1.00th=[ 18], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 39], 00:42:32.127 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 41], 00:42:32.127 | 70.00th=[ 43], 80.00th=[ 317], 90.00th=[ 355], 95.00th=[ 368], 00:42:32.127 | 99.00th=[ 472], 99.50th=[ 531], 99.90th=[ 535], 99.95th=[ 535], 00:42:32.127 | 99.99th=[ 535] 00:42:32.127 bw ( KiB/s): min= 128, max= 1664, per=3.62%, avg=518.47, stdev=563.51, samples=19 00:42:32.127 iops : min= 32, max= 416, avg=129.58, stdev=140.90, samples=19 00:42:32.127 lat (msec) : 
20=1.10%, 50=73.49%, 100=3.30%, 250=1.37%, 500=19.78% 00:42:32.127 lat (msec) : 750=0.96% 00:42:32.127 cpu : usr=96.89%, sys=1.90%, ctx=99, majf=0, minf=38 00:42:32.127 IO depths : 1=5.1%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.4%, 32=0.0%, >=64=0.0% 00:42:32.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.127 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.128 issued rwts: total=1456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.128 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:32.128 filename0: (groupid=0, jobs=1): err= 0: pid=1991876: Thu Jul 11 02:48:20 2024 00:42:32.128 read: IOPS=146, BW=587KiB/s (601kB/s)(5880KiB/10018msec) 00:42:32.128 slat (usec): min=5, max=144, avg=79.24, stdev=36.01 00:42:32.128 clat (msec): min=17, max=505, avg=108.44, stdev=127.45 00:42:32.128 lat (msec): min=17, max=505, avg=108.51, stdev=127.43 00:42:32.128 clat percentiles (msec): 00:42:32.128 | 1.00th=[ 24], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 39], 00:42:32.128 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 41], 00:42:32.128 | 70.00th=[ 45], 80.00th=[ 259], 90.00th=[ 355], 95.00th=[ 363], 00:42:32.128 | 99.00th=[ 409], 99.50th=[ 456], 99.90th=[ 506], 99.95th=[ 506], 00:42:32.128 | 99.99th=[ 506] 00:42:32.128 bw ( KiB/s): min= 126, max= 1648, per=3.66%, avg=525.42, stdev=564.85, samples=19 00:42:32.128 iops : min= 31, max= 412, avg=131.32, stdev=141.23, samples=19 00:42:32.128 lat (msec) : 20=0.95%, 50=72.11%, 100=4.08%, 250=2.31%, 500=20.41% 00:42:32.128 lat (msec) : 750=0.14% 00:42:32.128 cpu : usr=97.22%, sys=1.78%, ctx=160, majf=0, minf=40 00:42:32.128 IO depths : 1=1.1%, 2=7.3%, 4=25.0%, 8=55.2%, 16=11.4%, 32=0.0%, >=64=0.0% 00:42:32.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.128 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.128 issued rwts: total=1470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.128 
latency : target=0, window=0, percentile=100.00%, depth=16 00:42:32.128 filename0: (groupid=0, jobs=1): err= 0: pid=1991877: Thu Jul 11 02:48:20 2024 00:42:32.128 read: IOPS=152, BW=610KiB/s (625kB/s)(6104KiB/10005msec) 00:42:32.128 slat (usec): min=8, max=147, avg=40.53, stdev=40.83 00:42:32.128 clat (msec): min=24, max=369, avg=104.54, stdev=116.25 00:42:32.128 lat (msec): min=24, max=369, avg=104.58, stdev=116.27 00:42:32.128 clat percentiles (msec): 00:42:32.128 | 1.00th=[ 25], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 40], 00:42:32.128 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 41], 60.00th=[ 42], 00:42:32.128 | 70.00th=[ 44], 80.00th=[ 230], 90.00th=[ 342], 95.00th=[ 355], 00:42:32.128 | 99.00th=[ 368], 99.50th=[ 368], 99.90th=[ 372], 99.95th=[ 372], 00:42:32.128 | 99.99th=[ 372] 00:42:32.128 bw ( KiB/s): min= 128, max= 1664, per=3.87%, avg=554.63, stdev=563.06, samples=19 00:42:32.128 iops : min= 32, max= 416, avg=138.58, stdev=140.77, samples=19 00:42:32.128 lat (msec) : 50=72.61%, 100=2.23%, 250=5.77%, 500=19.40% 00:42:32.128 cpu : usr=98.00%, sys=1.37%, ctx=39, majf=0, minf=30 00:42:32.128 IO depths : 1=5.4%, 2=11.1%, 4=23.3%, 8=53.0%, 16=7.1%, 32=0.0%, >=64=0.0% 00:42:32.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.128 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.128 issued rwts: total=1526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.128 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:32.128 filename0: (groupid=0, jobs=1): err= 0: pid=1991878: Thu Jul 11 02:48:20 2024 00:42:32.128 read: IOPS=166, BW=665KiB/s (681kB/s)(6664KiB/10025msec) 00:42:32.128 slat (usec): min=4, max=145, avg=31.26, stdev=34.94 00:42:32.128 clat (msec): min=3, max=381, avg=96.00, stdev=96.26 00:42:32.128 lat (msec): min=3, max=381, avg=96.03, stdev=96.25 00:42:32.128 clat percentiles (msec): 00:42:32.128 | 1.00th=[ 5], 5.00th=[ 28], 10.00th=[ 39], 20.00th=[ 40], 00:42:32.128 | 30.00th=[ 40], 
40.00th=[ 40], 50.00th=[ 41], 60.00th=[ 42], 00:42:32.128 | 70.00th=[ 45], 80.00th=[ 226], 90.00th=[ 264], 95.00th=[ 275], 00:42:32.128 | 99.00th=[ 342], 99.50th=[ 359], 99.90th=[ 380], 99.95th=[ 380], 00:42:32.128 | 99.99th=[ 380] 00:42:32.128 bw ( KiB/s): min= 143, max= 1664, per=4.60%, avg=659.75, stdev=577.75, samples=20 00:42:32.128 iops : min= 35, max= 416, avg=164.90, stdev=144.47, samples=20 00:42:32.128 lat (msec) : 4=0.96%, 10=1.08%, 20=1.80%, 50=66.81%, 100=1.38% 00:42:32.128 lat (msec) : 250=14.53%, 500=13.45% 00:42:32.128 cpu : usr=97.58%, sys=1.54%, ctx=69, majf=0, minf=96 00:42:32.128 IO depths : 1=4.6%, 2=10.0%, 4=22.4%, 8=55.0%, 16=7.9%, 32=0.0%, >=64=0.0% 00:42:32.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.128 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.128 issued rwts: total=1666,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.128 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:32.128 filename0: (groupid=0, jobs=1): err= 0: pid=1991879: Thu Jul 11 02:48:20 2024 00:42:32.128 read: IOPS=149, BW=600KiB/s (614kB/s)(6000KiB/10005msec) 00:42:32.128 slat (usec): min=10, max=152, avg=86.38, stdev=32.25 00:42:32.128 clat (msec): min=33, max=465, avg=105.98, stdev=116.27 00:42:32.128 lat (msec): min=33, max=465, avg=106.06, stdev=116.26 00:42:32.128 clat percentiles (msec): 00:42:32.128 | 1.00th=[ 39], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 39], 00:42:32.128 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 41], 60.00th=[ 41], 00:42:32.128 | 70.00th=[ 44], 80.00th=[ 243], 90.00th=[ 351], 95.00th=[ 363], 00:42:32.128 | 99.00th=[ 372], 99.50th=[ 401], 99.90th=[ 464], 99.95th=[ 468], 00:42:32.128 | 99.99th=[ 468] 00:42:32.128 bw ( KiB/s): min= 128, max= 1664, per=3.79%, avg=543.74, stdev=553.27, samples=19 00:42:32.128 iops : min= 32, max= 416, avg=135.84, stdev=138.35, samples=19 00:42:32.128 lat (msec) : 50=71.20%, 100=3.47%, 250=6.27%, 500=19.07% 00:42:32.128 cpu : 
usr=96.88%, sys=1.81%, ctx=83, majf=0, minf=48 00:42:32.128 IO depths : 1=4.9%, 2=10.9%, 4=24.2%, 8=52.3%, 16=7.6%, 32=0.0%, >=64=0.0% 00:42:32.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.128 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.128 issued rwts: total=1500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.128 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:32.128 filename0: (groupid=0, jobs=1): err= 0: pid=1991880: Thu Jul 11 02:48:20 2024 00:42:32.128 read: IOPS=145, BW=582KiB/s (596kB/s)(5824KiB/10005msec) 00:42:32.128 slat (usec): min=7, max=151, avg=99.60, stdev=16.42 00:42:32.128 clat (msec): min=17, max=370, avg=109.06, stdev=129.26 00:42:32.128 lat (msec): min=17, max=370, avg=109.16, stdev=129.26 00:42:32.128 clat percentiles (msec): 00:42:32.128 | 1.00th=[ 18], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 39], 00:42:32.128 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 41], 00:42:32.128 | 70.00th=[ 43], 80.00th=[ 321], 90.00th=[ 355], 95.00th=[ 368], 00:42:32.128 | 99.00th=[ 368], 99.50th=[ 372], 99.90th=[ 372], 99.95th=[ 372], 00:42:32.128 | 99.99th=[ 372] 00:42:32.128 bw ( KiB/s): min= 128, max= 1664, per=3.62%, avg=518.53, stdev=563.20, samples=19 00:42:32.128 iops : min= 32, max= 416, avg=129.58, stdev=140.82, samples=19 00:42:32.128 lat (msec) : 20=1.10%, 50=73.49%, 100=3.43%, 500=21.98% 00:42:32.128 cpu : usr=97.77%, sys=1.41%, ctx=46, majf=0, minf=35 00:42:32.128 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:32.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.128 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.128 issued rwts: total=1456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.128 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:32.128 filename1: (groupid=0, jobs=1): err= 0: pid=1991881: Thu Jul 11 02:48:20 2024 00:42:32.128 
read: IOPS=145, BW=581KiB/s (595kB/s)(5816KiB/10009msec) 00:42:32.128 slat (usec): min=4, max=161, avg=88.39, stdev=31.44 00:42:32.128 clat (msec): min=15, max=406, avg=109.45, stdev=129.39 00:42:32.128 lat (msec): min=15, max=406, avg=109.53, stdev=129.38 00:42:32.128 clat percentiles (msec): 00:42:32.128 | 1.00th=[ 36], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 40], 00:42:32.128 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 41], 00:42:32.128 | 70.00th=[ 43], 80.00th=[ 334], 90.00th=[ 355], 95.00th=[ 363], 00:42:32.128 | 99.00th=[ 405], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:42:32.128 | 99.99th=[ 405] 00:42:32.128 bw ( KiB/s): min= 126, max= 1648, per=3.62%, avg=518.68, stdev=568.32, samples=19 00:42:32.128 iops : min= 31, max= 412, avg=129.63, stdev=142.10, samples=19 00:42:32.128 lat (msec) : 20=0.96%, 50=73.25%, 100=3.78%, 500=22.01% 00:42:32.128 cpu : usr=96.89%, sys=2.06%, ctx=139, majf=0, minf=35 00:42:32.128 IO depths : 1=1.9%, 2=8.1%, 4=25.0%, 8=54.4%, 16=10.6%, 32=0.0%, >=64=0.0% 00:42:32.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.128 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.128 issued rwts: total=1454,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.128 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:32.128 filename1: (groupid=0, jobs=1): err= 0: pid=1991882: Thu Jul 11 02:48:20 2024 00:42:32.128 read: IOPS=145, BW=582KiB/s (596kB/s)(5824KiB/10004msec) 00:42:32.128 slat (usec): min=7, max=153, avg=86.05, stdev=29.43 00:42:32.128 clat (msec): min=17, max=459, avg=109.18, stdev=129.23 00:42:32.128 lat (msec): min=17, max=459, avg=109.26, stdev=129.24 00:42:32.128 clat percentiles (msec): 00:42:32.128 | 1.00th=[ 19], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 40], 00:42:32.128 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 41], 00:42:32.128 | 70.00th=[ 43], 80.00th=[ 275], 90.00th=[ 359], 95.00th=[ 363], 00:42:32.128 | 99.00th=[ 430], 
99.50th=[ 451], 99.90th=[ 460], 99.95th=[ 460], 00:42:32.128 | 99.99th=[ 460] 00:42:32.128 bw ( KiB/s): min= 127, max= 1664, per=3.66%, avg=525.42, stdev=580.03, samples=19 00:42:32.128 iops : min= 31, max= 416, avg=131.32, stdev=145.04, samples=19 00:42:32.128 lat (msec) : 20=1.10%, 50=73.49%, 100=2.34%, 250=2.34%, 500=20.74% 00:42:32.128 cpu : usr=96.64%, sys=1.97%, ctx=93, majf=0, minf=44 00:42:32.128 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:42:32.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.128 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.128 issued rwts: total=1456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.128 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:32.128 filename1: (groupid=0, jobs=1): err= 0: pid=1991883: Thu Jul 11 02:48:20 2024 00:42:32.128 read: IOPS=155, BW=622KiB/s (637kB/s)(6232KiB/10026msec) 00:42:32.128 slat (usec): min=8, max=147, avg=36.90, stdev=32.55 00:42:32.128 clat (msec): min=9, max=450, avg=102.67, stdev=117.82 00:42:32.128 lat (msec): min=9, max=450, avg=102.70, stdev=117.84 00:42:32.128 clat percentiles (msec): 00:42:32.128 | 1.00th=[ 12], 5.00th=[ 28], 10.00th=[ 40], 20.00th=[ 40], 00:42:32.128 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 41], 60.00th=[ 41], 00:42:32.128 | 70.00th=[ 43], 80.00th=[ 243], 90.00th=[ 347], 95.00th=[ 359], 00:42:32.128 | 99.00th=[ 368], 99.50th=[ 439], 99.90th=[ 451], 99.95th=[ 451], 00:42:32.128 | 99.99th=[ 451] 00:42:32.128 bw ( KiB/s): min= 128, max= 1664, per=4.30%, avg=616.75, stdev=603.12, samples=20 00:42:32.128 iops : min= 32, max= 416, avg=154.15, stdev=150.80, samples=20 00:42:32.128 lat (msec) : 10=0.26%, 20=1.80%, 50=72.91%, 100=1.03%, 250=4.62% 00:42:32.128 lat (msec) : 500=19.38% 00:42:32.128 cpu : usr=97.77%, sys=1.52%, ctx=75, majf=0, minf=36 00:42:32.128 IO depths : 1=4.8%, 2=10.7%, 4=23.9%, 8=52.9%, 16=7.7%, 32=0.0%, >=64=0.0% 00:42:32.128 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.128 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.128 issued rwts: total=1558,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.129 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:32.129 filename1: (groupid=0, jobs=1): err= 0: pid=1991884: Thu Jul 11 02:48:20 2024 00:42:32.129 read: IOPS=150, BW=601KiB/s (616kB/s)(6016KiB/10005msec) 00:42:32.129 slat (usec): min=8, max=152, avg=76.22, stdev=35.01 00:42:32.129 clat (msec): min=27, max=445, avg=105.76, stdev=117.16 00:42:32.129 lat (msec): min=27, max=445, avg=105.84, stdev=117.13 00:42:32.129 clat percentiles (msec): 00:42:32.129 | 1.00th=[ 36], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 39], 00:42:32.129 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 41], 00:42:32.129 | 70.00th=[ 44], 80.00th=[ 234], 90.00th=[ 347], 95.00th=[ 363], 00:42:32.129 | 99.00th=[ 368], 99.50th=[ 430], 99.90th=[ 447], 99.95th=[ 447], 00:42:32.129 | 99.99th=[ 447] 00:42:32.129 bw ( KiB/s): min= 128, max= 1664, per=3.80%, avg=545.37, stdev=557.60, samples=19 00:42:32.129 iops : min= 32, max= 416, avg=136.26, stdev=139.42, samples=19 00:42:32.129 lat (msec) : 50=73.07%, 100=1.40%, 250=6.65%, 500=18.88% 00:42:32.129 cpu : usr=98.35%, sys=1.26%, ctx=39, majf=0, minf=32 00:42:32.129 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:42:32.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.129 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.129 issued rwts: total=1504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.129 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:32.129 filename1: (groupid=0, jobs=1): err= 0: pid=1991885: Thu Jul 11 02:48:20 2024 00:42:32.129 read: IOPS=145, BW=582KiB/s (596kB/s)(5824KiB/10012msec) 00:42:32.129 slat (usec): min=8, max=145, avg=45.78, stdev=33.13 00:42:32.129 clat (msec): min=27, 
max=394, avg=109.62, stdev=128.18 00:42:32.129 lat (msec): min=27, max=394, avg=109.66, stdev=128.20 00:42:32.129 clat percentiles (msec): 00:42:32.129 | 1.00th=[ 35], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 40], 00:42:32.129 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 41], 60.00th=[ 41], 00:42:32.129 | 70.00th=[ 43], 80.00th=[ 330], 90.00th=[ 355], 95.00th=[ 363], 00:42:32.129 | 99.00th=[ 393], 99.50th=[ 393], 99.90th=[ 393], 99.95th=[ 393], 00:42:32.129 | 99.99th=[ 393] 00:42:32.129 bw ( KiB/s): min= 127, max= 1664, per=3.66%, avg=525.16, stdev=570.74, samples=19 00:42:32.129 iops : min= 31, max= 416, avg=131.21, stdev=142.71, samples=19 00:42:32.129 lat (msec) : 50=74.86%, 100=2.06%, 250=2.20%, 500=20.88% 00:42:32.129 cpu : usr=98.46%, sys=1.14%, ctx=16, majf=0, minf=30 00:42:32.129 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:42:32.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.129 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.129 issued rwts: total=1456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.129 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:32.129 filename1: (groupid=0, jobs=1): err= 0: pid=1991886: Thu Jul 11 02:48:20 2024 00:42:32.129 read: IOPS=154, BW=616KiB/s (631kB/s)(6168KiB/10012msec) 00:42:32.129 slat (usec): min=8, max=158, avg=77.08, stdev=34.85 00:42:32.129 clat (msec): min=25, max=461, avg=103.23, stdev=107.71 00:42:32.129 lat (msec): min=25, max=461, avg=103.31, stdev=107.69 00:42:32.129 clat percentiles (msec): 00:42:32.129 | 1.00th=[ 36], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 39], 00:42:32.129 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 41], 00:42:32.129 | 70.00th=[ 48], 80.00th=[ 230], 90.00th=[ 279], 95.00th=[ 347], 00:42:32.129 | 99.00th=[ 363], 99.50th=[ 401], 99.90th=[ 460], 99.95th=[ 464], 00:42:32.129 | 99.99th=[ 464] 00:42:32.129 bw ( KiB/s): min= 128, max= 1664, per=3.92%, avg=561.37, 
stdev=547.81, samples=19 00:42:32.129 iops : min= 32, max= 416, avg=140.26, stdev=136.98, samples=19 00:42:32.129 lat (msec) : 50=71.34%, 100=1.43%, 250=11.28%, 500=15.95% 00:42:32.129 cpu : usr=98.39%, sys=1.17%, ctx=22, majf=0, minf=32 00:42:32.129 IO depths : 1=4.9%, 2=10.6%, 4=23.2%, 8=53.8%, 16=7.6%, 32=0.0%, >=64=0.0% 00:42:32.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.129 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.129 issued rwts: total=1542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.129 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:32.129 filename1: (groupid=0, jobs=1): err= 0: pid=1991887: Thu Jul 11 02:48:20 2024 00:42:32.129 read: IOPS=146, BW=587KiB/s (601kB/s)(5880KiB/10018msec) 00:42:32.129 slat (usec): min=8, max=156, avg=39.69, stdev=23.97 00:42:32.129 clat (msec): min=17, max=460, avg=108.74, stdev=126.82 00:42:32.129 lat (msec): min=17, max=460, avg=108.78, stdev=126.83 00:42:32.129 clat percentiles (msec): 00:42:32.129 | 1.00th=[ 28], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 40], 00:42:32.129 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 41], 60.00th=[ 41], 00:42:32.129 | 70.00th=[ 44], 80.00th=[ 249], 90.00th=[ 355], 95.00th=[ 363], 00:42:32.129 | 99.00th=[ 405], 99.50th=[ 439], 99.90th=[ 460], 99.95th=[ 460], 00:42:32.129 | 99.99th=[ 460] 00:42:32.129 bw ( KiB/s): min= 128, max= 1664, per=3.66%, avg=525.37, stdev=565.21, samples=19 00:42:32.129 iops : min= 32, max= 416, avg=131.32, stdev=141.32, samples=19 00:42:32.129 lat (msec) : 20=0.95%, 50=72.65%, 100=3.54%, 250=3.27%, 500=19.59% 00:42:32.129 cpu : usr=98.51%, sys=1.07%, ctx=22, majf=0, minf=29 00:42:32.129 IO depths : 1=1.4%, 2=7.6%, 4=25.0%, 8=54.9%, 16=11.1%, 32=0.0%, >=64=0.0% 00:42:32.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.129 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.129 issued rwts: total=1470,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:42:32.129 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:32.129 filename1: (groupid=0, jobs=1): err= 0: pid=1991888: Thu Jul 11 02:48:20 2024 00:42:32.129 read: IOPS=145, BW=581KiB/s (595kB/s)(5824KiB/10017msec) 00:42:32.129 slat (usec): min=14, max=163, avg=80.75, stdev=29.18 00:42:32.129 clat (msec): min=27, max=369, avg=109.37, stdev=127.64 00:42:32.129 lat (msec): min=27, max=369, avg=109.45, stdev=127.62 00:42:32.129 clat percentiles (msec): 00:42:32.129 | 1.00th=[ 38], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 39], 00:42:32.129 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 41], 00:42:32.129 | 70.00th=[ 43], 80.00th=[ 330], 90.00th=[ 355], 95.00th=[ 363], 00:42:32.129 | 99.00th=[ 368], 99.50th=[ 368], 99.90th=[ 372], 99.95th=[ 372], 00:42:32.129 | 99.99th=[ 372] 00:42:32.129 bw ( KiB/s): min= 127, max= 1664, per=4.05%, avg=581.30, stdev=603.78, samples=20 00:42:32.129 iops : min= 31, max= 416, avg=145.25, stdev=150.98, samples=20 00:42:32.129 lat (msec) : 50=74.18%, 100=2.75%, 250=2.20%, 500=20.88% 00:42:32.129 cpu : usr=97.85%, sys=1.43%, ctx=44, majf=0, minf=35 00:42:32.129 IO depths : 1=4.3%, 2=10.6%, 4=25.0%, 8=51.9%, 16=8.2%, 32=0.0%, >=64=0.0% 00:42:32.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.129 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.129 issued rwts: total=1456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.129 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:32.129 filename2: (groupid=0, jobs=1): err= 0: pid=1991889: Thu Jul 11 02:48:20 2024 00:42:32.129 read: IOPS=145, BW=582KiB/s (596kB/s)(5824KiB/10007msec) 00:42:32.129 slat (usec): min=7, max=156, avg=88.59, stdev=27.23 00:42:32.129 clat (msec): min=8, max=536, avg=109.31, stdev=130.85 00:42:32.129 lat (msec): min=8, max=536, avg=109.40, stdev=130.85 00:42:32.129 clat percentiles (msec): 00:42:32.129 | 1.00th=[ 18], 5.00th=[ 39], 10.00th=[ 
39], 20.00th=[ 40], 00:42:32.129 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 41], 00:42:32.129 | 70.00th=[ 43], 80.00th=[ 321], 90.00th=[ 355], 95.00th=[ 368], 00:42:32.129 | 99.00th=[ 372], 99.50th=[ 518], 99.90th=[ 535], 99.95th=[ 535], 00:42:32.129 | 99.99th=[ 535] 00:42:32.129 bw ( KiB/s): min= 112, max= 1648, per=3.62%, avg=518.47, stdev=563.91, samples=19 00:42:32.129 iops : min= 28, max= 412, avg=129.58, stdev=141.00, samples=19 00:42:32.129 lat (msec) : 10=0.14%, 20=1.10%, 50=73.49%, 100=3.16%, 250=0.96% 00:42:32.129 lat (msec) : 500=20.47%, 750=0.69% 00:42:32.129 cpu : usr=98.28%, sys=1.32%, ctx=23, majf=0, minf=35 00:42:32.129 IO depths : 1=1.2%, 2=7.4%, 4=25.0%, 8=55.1%, 16=11.3%, 32=0.0%, >=64=0.0% 00:42:32.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.129 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.129 issued rwts: total=1456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.129 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:32.129 filename2: (groupid=0, jobs=1): err= 0: pid=1991890: Thu Jul 11 02:48:20 2024 00:42:32.129 read: IOPS=145, BW=582KiB/s (596kB/s)(5824KiB/10005msec) 00:42:32.129 slat (usec): min=11, max=154, avg=98.37, stdev=16.23 00:42:32.129 clat (msec): min=17, max=369, avg=109.13, stdev=128.96 00:42:32.129 lat (msec): min=17, max=369, avg=109.23, stdev=128.96 00:42:32.129 clat percentiles (msec): 00:42:32.129 | 1.00th=[ 18], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 39], 00:42:32.129 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 41], 00:42:32.129 | 70.00th=[ 44], 80.00th=[ 321], 90.00th=[ 355], 95.00th=[ 368], 00:42:32.129 | 99.00th=[ 368], 99.50th=[ 368], 99.90th=[ 372], 99.95th=[ 372], 00:42:32.129 | 99.99th=[ 372] 00:42:32.129 bw ( KiB/s): min= 128, max= 1664, per=3.62%, avg=518.53, stdev=564.44, samples=19 00:42:32.129 iops : min= 32, max= 416, avg=129.58, stdev=141.13, samples=19 00:42:32.129 lat (msec) : 20=1.10%, 
50=72.39%, 100=4.26%, 250=0.27%, 500=21.98% 00:42:32.129 cpu : usr=98.46%, sys=1.12%, ctx=16, majf=0, minf=28 00:42:32.129 IO depths : 1=4.0%, 2=10.2%, 4=25.0%, 8=52.3%, 16=8.5%, 32=0.0%, >=64=0.0% 00:42:32.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.129 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.129 issued rwts: total=1456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.129 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:32.129 filename2: (groupid=0, jobs=1): err= 0: pid=1991891: Thu Jul 11 02:48:20 2024 00:42:32.129 read: IOPS=159, BW=639KiB/s (654kB/s)(6400KiB/10020msec) 00:42:32.129 slat (usec): min=8, max=157, avg=77.08, stdev=36.74 00:42:32.129 clat (msec): min=14, max=393, avg=99.54, stdev=101.12 00:42:32.129 lat (msec): min=14, max=393, avg=99.62, stdev=101.09 00:42:32.129 clat percentiles (msec): 00:42:32.129 | 1.00th=[ 15], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 39], 00:42:32.129 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 41], 00:42:32.129 | 70.00th=[ 49], 80.00th=[ 224], 90.00th=[ 264], 95.00th=[ 275], 00:42:32.129 | 99.00th=[ 388], 99.50th=[ 393], 99.90th=[ 393], 99.95th=[ 393], 00:42:32.129 | 99.99th=[ 393] 00:42:32.129 bw ( KiB/s): min= 128, max= 1664, per=4.42%, avg=633.55, stdev=583.29, samples=20 00:42:32.129 iops : min= 32, max= 416, avg=158.35, stdev=145.84, samples=20 00:42:32.129 lat (msec) : 20=1.44%, 50=69.38%, 100=1.19%, 250=11.62%, 500=16.38% 00:42:32.129 cpu : usr=98.39%, sys=1.19%, ctx=17, majf=0, minf=37 00:42:32.129 IO depths : 1=4.6%, 2=10.8%, 4=24.8%, 8=52.0%, 16=7.9%, 32=0.0%, >=64=0.0% 00:42:32.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.130 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.130 issued rwts: total=1600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.130 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:32.130 filename2: 
(groupid=0, jobs=1): err= 0: pid=1991892: Thu Jul 11 02:48:20 2024 00:42:32.130 read: IOPS=147, BW=588KiB/s (602kB/s)(5888KiB/10012msec) 00:42:32.130 slat (usec): min=13, max=169, avg=92.66, stdev=29.15 00:42:32.130 clat (msec): min=35, max=442, avg=108.00, stdev=124.43 00:42:32.130 lat (msec): min=35, max=442, avg=108.09, stdev=124.42 00:42:32.130 clat percentiles (msec): 00:42:32.130 | 1.00th=[ 36], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 39], 00:42:32.130 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 41], 00:42:32.130 | 70.00th=[ 43], 80.00th=[ 243], 90.00th=[ 355], 95.00th=[ 363], 00:42:32.130 | 99.00th=[ 372], 99.50th=[ 372], 99.90th=[ 443], 99.95th=[ 443], 00:42:32.130 | 99.99th=[ 443] 00:42:32.130 bw ( KiB/s): min= 127, max= 1664, per=3.71%, avg=531.89, stdev=566.18, samples=19 00:42:32.130 iops : min= 31, max= 416, avg=132.89, stdev=141.57, samples=19 00:42:32.130 lat (msec) : 50=75.00%, 100=1.09%, 250=4.35%, 500=19.57% 00:42:32.130 cpu : usr=95.61%, sys=2.48%, ctx=381, majf=0, minf=40 00:42:32.130 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:42:32.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.130 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.130 issued rwts: total=1472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.130 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:32.130 filename2: (groupid=0, jobs=1): err= 0: pid=1991893: Thu Jul 11 02:48:20 2024 00:42:32.130 read: IOPS=145, BW=582KiB/s (596kB/s)(5824KiB/10004msec) 00:42:32.130 slat (usec): min=9, max=155, avg=78.70, stdev=38.62 00:42:32.130 clat (msec): min=13, max=406, avg=109.23, stdev=129.35 00:42:32.130 lat (msec): min=13, max=406, avg=109.31, stdev=129.36 00:42:32.130 clat percentiles (msec): 00:42:32.130 | 1.00th=[ 14], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 40], 00:42:32.130 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 41], 00:42:32.130 | 70.00th=[ 43], 
80.00th=[ 334], 90.00th=[ 355], 95.00th=[ 363], 00:42:32.130 | 99.00th=[ 405], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:42:32.130 | 99.99th=[ 405] 00:42:32.130 bw ( KiB/s): min= 127, max= 1664, per=3.62%, avg=518.68, stdev=571.63, samples=19 00:42:32.130 iops : min= 31, max= 416, avg=129.63, stdev=142.94, samples=19 00:42:32.130 lat (msec) : 20=2.20%, 50=72.39%, 100=3.43%, 500=21.98% 00:42:32.130 cpu : usr=97.72%, sys=1.48%, ctx=78, majf=0, minf=23 00:42:32.130 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:32.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.130 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.130 issued rwts: total=1456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.130 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:32.130 filename2: (groupid=0, jobs=1): err= 0: pid=1991894: Thu Jul 11 02:48:20 2024 00:42:32.130 read: IOPS=147, BW=588KiB/s (602kB/s)(5888KiB/10012msec) 00:42:32.130 slat (usec): min=11, max=114, avg=34.86, stdev=14.66 00:42:32.130 clat (msec): min=25, max=463, avg=108.51, stdev=125.11 00:42:32.130 lat (msec): min=25, max=463, avg=108.54, stdev=125.11 00:42:32.130 clat percentiles (msec): 00:42:32.130 | 1.00th=[ 36], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 40], 00:42:32.130 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 41], 60.00th=[ 41], 00:42:32.130 | 70.00th=[ 43], 80.00th=[ 241], 90.00th=[ 355], 95.00th=[ 363], 00:42:32.130 | 99.00th=[ 409], 99.50th=[ 447], 99.90th=[ 464], 99.95th=[ 464], 00:42:32.130 | 99.99th=[ 464] 00:42:32.130 bw ( KiB/s): min= 127, max= 1664, per=3.71%, avg=531.95, stdev=565.98, samples=19 00:42:32.130 iops : min= 31, max= 416, avg=132.89, stdev=141.53, samples=19 00:42:32.130 lat (msec) : 50=74.73%, 100=1.49%, 250=4.21%, 500=19.57% 00:42:32.130 cpu : usr=98.56%, sys=1.02%, ctx=29, majf=0, minf=36 00:42:32.130 IO depths : 1=5.4%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.1%, 32=0.0%, >=64=0.0% 
00:42:32.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.130 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.130 issued rwts: total=1472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.130 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:32.130 filename2: (groupid=0, jobs=1): err= 0: pid=1991895: Thu Jul 11 02:48:20 2024 00:42:32.130 read: IOPS=145, BW=582KiB/s (596kB/s)(5824KiB/10004msec) 00:42:32.130 slat (usec): min=19, max=151, avg=92.94, stdev=22.12 00:42:32.130 clat (msec): min=12, max=495, avg=109.12, stdev=129.77 00:42:32.130 lat (msec): min=12, max=495, avg=109.21, stdev=129.77 00:42:32.130 clat percentiles (msec): 00:42:32.130 | 1.00th=[ 14], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 39], 00:42:32.130 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 41], 00:42:32.130 | 70.00th=[ 43], 80.00th=[ 334], 90.00th=[ 359], 95.00th=[ 363], 00:42:32.130 | 99.00th=[ 405], 99.50th=[ 405], 99.90th=[ 498], 99.95th=[ 498], 00:42:32.130 | 99.99th=[ 498] 00:42:32.130 bw ( KiB/s): min= 127, max= 1664, per=3.62%, avg=518.68, stdev=571.63, samples=19 00:42:32.130 iops : min= 31, max= 416, avg=129.63, stdev=142.94, samples=19 00:42:32.130 lat (msec) : 20=2.20%, 50=72.25%, 100=3.57%, 250=0.14%, 500=21.84% 00:42:32.130 cpu : usr=98.38%, sys=1.20%, ctx=28, majf=0, minf=24 00:42:32.130 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:42:32.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.130 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.130 issued rwts: total=1456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.130 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:32.130 filename2: (groupid=0, jobs=1): err= 0: pid=1991896: Thu Jul 11 02:48:20 2024 00:42:32.130 read: IOPS=161, BW=645KiB/s (660kB/s)(6456KiB/10011msec) 00:42:32.130 slat (usec): min=8, max=144, avg=22.97, stdev=20.47 
00:42:32.130 clat (msec): min=26, max=362, avg=99.06, stdev=93.38 00:42:32.130 lat (msec): min=26, max=362, avg=99.08, stdev=93.37 00:42:32.130 clat percentiles (msec): 00:42:32.130 | 1.00th=[ 28], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 40], 00:42:32.130 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 41], 60.00th=[ 42], 00:42:32.130 | 70.00th=[ 84], 80.00th=[ 226], 90.00th=[ 247], 95.00th=[ 271], 00:42:32.130 | 99.00th=[ 321], 99.50th=[ 359], 99.90th=[ 363], 99.95th=[ 363], 00:42:32.130 | 99.99th=[ 363] 00:42:32.130 bw ( KiB/s): min= 224, max= 1664, per=4.12%, avg=591.68, stdev=533.71, samples=19 00:42:32.130 iops : min= 56, max= 416, avg=147.84, stdev=133.44, samples=19 00:42:32.130 lat (msec) : 50=69.27%, 100=1.12%, 250=19.95%, 500=9.67% 00:42:32.130 cpu : usr=98.63%, sys=0.99%, ctx=15, majf=0, minf=37 00:42:32.130 IO depths : 1=4.3%, 2=9.2%, 4=20.8%, 8=57.4%, 16=8.2%, 32=0.0%, >=64=0.0% 00:42:32.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.130 complete : 0=0.0%, 4=92.9%, 8=1.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.130 issued rwts: total=1614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.130 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:32.130 00:42:32.130 Run status group 0 (all jobs): 00:42:32.130 READ: bw=14.0MiB/s (14.7MB/s), 581KiB/s-665KiB/s (595kB/s-681kB/s), io=140MiB (147MB), run=10004-10030msec 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 
00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@46 -- # destroy_subsystem 2 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:32.130 02:48:21 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:32.130 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:32.131 bdev_null0 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:32.131 [2024-07-11 02:48:21.162083] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:32.131 02:48:21 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:32.131 bdev_null1 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:42:32.131 { 00:42:32.131 "params": { 00:42:32.131 "name": "Nvme$subsystem", 00:42:32.131 "trtype": "$TEST_TRANSPORT", 00:42:32.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:32.131 "adrfam": "ipv4", 00:42:32.131 "trsvcid": "$NVMF_PORT", 00:42:32.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:32.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:32.131 "hdgst": ${hdgst:-false}, 00:42:32.131 "ddgst": ${ddgst:-false} 00:42:32.131 }, 00:42:32.131 "method": "bdev_nvme_attach_controller" 00:42:32.131 } 00:42:32.131 EOF 00:42:32.131 )") 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:42:32.131 { 00:42:32.131 "params": { 00:42:32.131 "name": "Nvme$subsystem", 00:42:32.131 "trtype": "$TEST_TRANSPORT", 00:42:32.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:32.131 
"adrfam": "ipv4", 00:42:32.131 "trsvcid": "$NVMF_PORT", 00:42:32.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:32.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:32.131 "hdgst": ${hdgst:-false}, 00:42:32.131 "ddgst": ${ddgst:-false} 00:42:32.131 }, 00:42:32.131 "method": "bdev_nvme_attach_controller" 00:42:32.131 } 00:42:32.131 EOF 00:42:32.131 )") 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:42:32.131 "params": { 00:42:32.131 "name": "Nvme0", 00:42:32.131 "trtype": "tcp", 00:42:32.131 "traddr": "10.0.0.2", 00:42:32.131 "adrfam": "ipv4", 00:42:32.131 "trsvcid": "4420", 00:42:32.131 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:32.131 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:32.131 "hdgst": false, 00:42:32.131 "ddgst": false 00:42:32.131 }, 00:42:32.131 "method": "bdev_nvme_attach_controller" 00:42:32.131 },{ 00:42:32.131 "params": { 00:42:32.131 "name": "Nvme1", 00:42:32.131 "trtype": "tcp", 00:42:32.131 "traddr": "10.0.0.2", 00:42:32.131 "adrfam": "ipv4", 00:42:32.131 "trsvcid": "4420", 00:42:32.131 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:32.131 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:32.131 "hdgst": false, 00:42:32.131 "ddgst": false 00:42:32.131 }, 00:42:32.131 "method": "bdev_nvme_attach_controller" 00:42:32.131 }' 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:42:32.131 02:48:21 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:32.131 02:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:32.131 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:42:32.131 ... 00:42:32.131 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:42:32.131 ... 
00:42:32.131 fio-3.35 00:42:32.131 Starting 4 threads 00:42:32.131 EAL: No free 2048 kB hugepages reported on node 1 00:42:37.424 00:42:37.424 filename0: (groupid=0, jobs=1): err= 0: pid=1992939: Thu Jul 11 02:48:27 2024 00:42:37.424 read: IOPS=1673, BW=13.1MiB/s (13.7MB/s)(65.4MiB/5002msec) 00:42:37.424 slat (nsec): min=7663, max=64557, avg=20177.37, stdev=10270.71 00:42:37.424 clat (usec): min=1186, max=8352, avg=4706.26, stdev=625.49 00:42:37.424 lat (usec): min=1202, max=8378, avg=4726.43, stdev=624.99 00:42:37.424 clat percentiles (usec): 00:42:37.424 | 1.00th=[ 3163], 5.00th=[ 3982], 10.00th=[ 4228], 20.00th=[ 4490], 00:42:37.424 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4621], 60.00th=[ 4686], 00:42:37.424 | 70.00th=[ 4686], 80.00th=[ 4817], 90.00th=[ 5276], 95.00th=[ 5735], 00:42:37.424 | 99.00th=[ 7504], 99.50th=[ 7701], 99.90th=[ 8094], 99.95th=[ 8160], 00:42:37.424 | 99.99th=[ 8356] 00:42:37.424 bw ( KiB/s): min=13120, max=13696, per=24.68%, avg=13386.60, stdev=186.65, samples=10 00:42:37.424 iops : min= 1640, max= 1712, avg=1673.30, stdev=23.35, samples=10 00:42:37.424 lat (msec) : 2=0.35%, 4=4.94%, 10=94.71% 00:42:37.424 cpu : usr=95.52%, sys=4.06%, ctx=11, majf=0, minf=9 00:42:37.424 IO depths : 1=0.2%, 2=18.1%, 4=54.4%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:37.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:37.424 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:37.424 issued rwts: total=8373,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:37.424 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:37.424 filename0: (groupid=0, jobs=1): err= 0: pid=1992940: Thu Jul 11 02:48:27 2024 00:42:37.424 read: IOPS=1716, BW=13.4MiB/s (14.1MB/s)(67.1MiB/5001msec) 00:42:37.424 slat (nsec): min=4606, max=68195, avg=15030.10, stdev=6737.23 00:42:37.424 clat (usec): min=1106, max=8071, avg=4614.70, stdev=467.44 00:42:37.424 lat (usec): min=1121, max=8082, avg=4629.73, stdev=467.90 
00:42:37.424 clat percentiles (usec): 00:42:37.424 | 1.00th=[ 3359], 5.00th=[ 3916], 10.00th=[ 4047], 20.00th=[ 4293], 00:42:37.424 | 30.00th=[ 4490], 40.00th=[ 4621], 50.00th=[ 4621], 60.00th=[ 4686], 00:42:37.424 | 70.00th=[ 4752], 80.00th=[ 4817], 90.00th=[ 5145], 95.00th=[ 5407], 00:42:37.424 | 99.00th=[ 5800], 99.50th=[ 6390], 99.90th=[ 7570], 99.95th=[ 7701], 00:42:37.424 | 99.99th=[ 8094] 00:42:37.424 bw ( KiB/s): min=13568, max=13968, per=25.31%, avg=13729.10, stdev=113.41, samples=10 00:42:37.424 iops : min= 1696, max= 1746, avg=1716.10, stdev=14.21, samples=10 00:42:37.424 lat (msec) : 2=0.06%, 4=8.88%, 10=91.06% 00:42:37.424 cpu : usr=95.30%, sys=3.74%, ctx=200, majf=0, minf=10 00:42:37.424 IO depths : 1=0.2%, 2=9.2%, 4=60.8%, 8=29.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:37.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:37.424 complete : 0=0.0%, 4=94.1%, 8=5.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:37.424 issued rwts: total=8584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:37.424 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:37.424 filename1: (groupid=0, jobs=1): err= 0: pid=1992941: Thu Jul 11 02:48:27 2024 00:42:37.424 read: IOPS=1700, BW=13.3MiB/s (13.9MB/s)(66.4MiB/5001msec) 00:42:37.424 slat (nsec): min=4632, max=73291, avg=22186.75, stdev=11915.84 00:42:37.424 clat (usec): min=886, max=8150, avg=4620.88, stdev=525.70 00:42:37.424 lat (usec): min=909, max=8171, avg=4643.07, stdev=525.84 00:42:37.424 clat percentiles (usec): 00:42:37.424 | 1.00th=[ 3064], 5.00th=[ 3916], 10.00th=[ 4113], 20.00th=[ 4424], 00:42:37.424 | 30.00th=[ 4555], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4621], 00:42:37.424 | 70.00th=[ 4686], 80.00th=[ 4752], 90.00th=[ 5080], 95.00th=[ 5407], 00:42:37.424 | 99.00th=[ 6587], 99.50th=[ 7111], 99.90th=[ 7701], 99.95th=[ 7898], 00:42:37.424 | 99.99th=[ 8160] 00:42:37.424 bw ( KiB/s): min=13184, max=13824, per=25.06%, avg=13591.11, stdev=217.20, samples=9 00:42:37.424 iops : 
min= 1648, max= 1728, avg=1698.89, stdev=27.15, samples=9 00:42:37.424 lat (usec) : 1000=0.01% 00:42:37.424 lat (msec) : 2=0.28%, 4=6.10%, 10=93.60% 00:42:37.424 cpu : usr=92.24%, sys=5.42%, ctx=246, majf=0, minf=9 00:42:37.424 IO depths : 1=0.1%, 2=18.3%, 4=53.7%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:37.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:37.424 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:37.424 issued rwts: total=8502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:37.424 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:37.424 filename1: (groupid=0, jobs=1): err= 0: pid=1992942: Thu Jul 11 02:48:27 2024 00:42:37.424 read: IOPS=1690, BW=13.2MiB/s (13.8MB/s)(66.1MiB/5003msec) 00:42:37.424 slat (nsec): min=4602, max=70694, avg=21111.23, stdev=11829.71 00:42:37.424 clat (usec): min=942, max=9411, avg=4655.97, stdev=573.56 00:42:37.424 lat (usec): min=957, max=9438, avg=4677.08, stdev=573.55 00:42:37.424 clat percentiles (usec): 00:42:37.424 | 1.00th=[ 3130], 5.00th=[ 3916], 10.00th=[ 4228], 20.00th=[ 4490], 00:42:37.424 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4621], 60.00th=[ 4686], 00:42:37.424 | 70.00th=[ 4686], 80.00th=[ 4752], 90.00th=[ 5014], 95.00th=[ 5473], 00:42:37.424 | 99.00th=[ 7177], 99.50th=[ 7701], 99.90th=[ 8455], 99.95th=[ 8586], 00:42:37.424 | 99.99th=[ 9372] 00:42:37.424 bw ( KiB/s): min=13056, max=13712, per=24.93%, avg=13521.60, stdev=194.57, samples=10 00:42:37.424 iops : min= 1632, max= 1714, avg=1690.20, stdev=24.32, samples=10 00:42:37.424 lat (usec) : 1000=0.01% 00:42:37.424 lat (msec) : 2=0.15%, 4=6.44%, 10=93.39% 00:42:37.424 cpu : usr=96.04%, sys=3.32%, ctx=38, majf=0, minf=9 00:42:37.424 IO depths : 1=0.1%, 2=17.7%, 4=54.4%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:37.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:37.424 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:42:37.424 issued rwts: total=8458,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:37.424 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:37.424 00:42:37.424 Run status group 0 (all jobs): 00:42:37.424 READ: bw=53.0MiB/s (55.5MB/s), 13.1MiB/s-13.4MiB/s (13.7MB/s-14.1MB/s), io=265MiB (278MB), run=5001-5003msec 00:42:37.424 02:48:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:42:37.424 02:48:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:37.424 02:48:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:37.424 02:48:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:37.424 02:48:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:37.424 02:48:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:37.424 02:48:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:37.424 02:48:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:37.424 02:48:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:37.424 02:48:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:37.424 02:48:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:37.424 02:48:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:37.424 02:48:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:37.424 02:48:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:37.424 02:48:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:37.424 02:48:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:42:37.424 02:48:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:37.424 02:48:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:37.424 02:48:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:37.424 02:48:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:37.424 02:48:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:37.424 02:48:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:37.424 02:48:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:37.424 02:48:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:37.424 00:42:37.424 real 0m24.152s 00:42:37.424 user 4m32.243s 00:42:37.424 sys 0m6.019s 00:42:37.424 02:48:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:37.424 02:48:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:37.424 ************************************ 00:42:37.425 END TEST fio_dif_rand_params 00:42:37.425 ************************************ 00:42:37.425 02:48:27 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:42:37.425 02:48:27 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:42:37.425 02:48:27 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:37.425 02:48:27 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:37.425 02:48:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:37.425 ************************************ 00:42:37.425 START TEST fio_dif_digest 00:42:37.425 ************************************ 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- 
target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:37.425 bdev_null0 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:37.425 
02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:37.425 [2024-07-11 02:48:27.642795] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:42:37.425 { 00:42:37.425 "params": { 00:42:37.425 "name": "Nvme$subsystem", 00:42:37.425 "trtype": "$TEST_TRANSPORT", 00:42:37.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:37.425 "adrfam": "ipv4", 00:42:37.425 "trsvcid": "$NVMF_PORT", 00:42:37.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:37.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:37.425 "hdgst": ${hdgst:-false}, 00:42:37.425 "ddgst": ${ddgst:-false} 00:42:37.425 }, 00:42:37.425 "method": "bdev_nvme_attach_controller" 00:42:37.425 } 00:42:37.425 EOF 00:42:37.425 )") 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:42:37.425 "params": { 00:42:37.425 "name": "Nvme0", 00:42:37.425 "trtype": "tcp", 00:42:37.425 "traddr": "10.0.0.2", 00:42:37.425 "adrfam": "ipv4", 00:42:37.425 "trsvcid": "4420", 00:42:37.425 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:37.425 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:37.425 "hdgst": true, 00:42:37.425 "ddgst": true 00:42:37.425 }, 00:42:37.425 "method": "bdev_nvme_attach_controller" 00:42:37.425 }' 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:37.425 02:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:37.693 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:37.693 ... 00:42:37.693 fio-3.35 00:42:37.693 Starting 3 threads 00:42:37.693 EAL: No free 2048 kB hugepages reported on node 1 00:42:49.897 00:42:49.897 filename0: (groupid=0, jobs=1): err= 0: pid=1993519: Thu Jul 11 02:48:38 2024 00:42:49.897 read: IOPS=175, BW=21.9MiB/s (23.0MB/s)(220MiB/10046msec) 00:42:49.897 slat (nsec): min=8051, max=34469, avg=15296.04, stdev=3931.17 00:42:49.897 clat (usec): min=12831, max=59063, avg=17080.57, stdev=2509.26 00:42:49.897 lat (usec): min=12843, max=59084, avg=17095.86, stdev=2509.34 00:42:49.897 clat percentiles (usec): 00:42:49.897 | 1.00th=[14091], 5.00th=[14877], 10.00th=[15270], 20.00th=[15795], 00:42:49.898 | 30.00th=[16188], 40.00th=[16581], 50.00th=[16712], 60.00th=[17171], 00:42:49.898 | 70.00th=[17433], 80.00th=[18220], 90.00th=[19006], 95.00th=[19792], 00:42:49.898 | 99.00th=[20841], 99.50th=[22152], 99.90th=[57934], 99.95th=[58983], 00:42:49.898 | 99.99th=[58983] 00:42:49.898 bw ( KiB/s): min=19968, max=23552, per=32.98%, avg=22491.85, stdev=1136.75, samples=20 00:42:49.898 iops : min= 156, max= 184, avg=175.70, stdev= 8.88, samples=20 00:42:49.898 lat (msec) : 20=96.25%, 50=3.52%, 100=0.23% 00:42:49.898 cpu : usr=95.11%, sys=4.47%, ctx=23, majf=0, minf=134 00:42:49.898 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:49.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:49.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:49.898 issued rwts: total=1760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:49.898 
latency : target=0, window=0, percentile=100.00%, depth=3 00:42:49.898 filename0: (groupid=0, jobs=1): err= 0: pid=1993520: Thu Jul 11 02:48:38 2024 00:42:49.898 read: IOPS=184, BW=23.1MiB/s (24.2MB/s)(232MiB/10049msec) 00:42:49.898 slat (nsec): min=7991, max=68534, avg=15443.05, stdev=4571.49 00:42:49.898 clat (usec): min=10410, max=51600, avg=16174.12, stdev=1692.61 00:42:49.898 lat (usec): min=10429, max=51619, avg=16189.56, stdev=1692.47 00:42:49.898 clat percentiles (usec): 00:42:49.898 | 1.00th=[13304], 5.00th=[14353], 10.00th=[14615], 20.00th=[15139], 00:42:49.898 | 30.00th=[15533], 40.00th=[15795], 50.00th=[16057], 60.00th=[16319], 00:42:49.898 | 70.00th=[16581], 80.00th=[16909], 90.00th=[17695], 95.00th=[18744], 00:42:49.898 | 99.00th=[19792], 99.50th=[20055], 99.90th=[48497], 99.95th=[51643], 00:42:49.898 | 99.99th=[51643] 00:42:49.898 bw ( KiB/s): min=21248, max=25344, per=34.83%, avg=23756.80, stdev=1084.84, samples=20 00:42:49.898 iops : min= 166, max= 198, avg=185.60, stdev= 8.48, samples=20 00:42:49.898 lat (msec) : 20=99.41%, 50=0.54%, 100=0.05% 00:42:49.898 cpu : usr=94.87%, sys=4.71%, ctx=29, majf=0, minf=188 00:42:49.898 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:49.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:49.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:49.898 issued rwts: total=1859,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:49.898 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:49.898 filename0: (groupid=0, jobs=1): err= 0: pid=1993521: Thu Jul 11 02:48:38 2024 00:42:49.898 read: IOPS=172, BW=21.6MiB/s (22.6MB/s)(217MiB/10043msec) 00:42:49.898 slat (nsec): min=9270, max=94345, avg=22895.42, stdev=4665.48 00:42:49.898 clat (usec): min=10434, max=57185, avg=17313.71, stdev=2076.85 00:42:49.898 lat (usec): min=10455, max=57211, avg=17336.61, stdev=2077.10 00:42:49.898 clat percentiles (usec): 00:42:49.898 | 
1.00th=[14353], 5.00th=[15270], 10.00th=[15664], 20.00th=[16188], 00:42:49.898 | 30.00th=[16450], 40.00th=[16712], 50.00th=[16909], 60.00th=[17171], 00:42:49.898 | 70.00th=[17695], 80.00th=[18220], 90.00th=[19530], 95.00th=[20841], 00:42:49.898 | 99.00th=[22414], 99.50th=[23200], 99.90th=[51119], 99.95th=[57410], 00:42:49.898 | 99.99th=[57410] 00:42:49.898 bw ( KiB/s): min=18432, max=23808, per=32.53%, avg=22184.55, stdev=1515.30, samples=20 00:42:49.898 iops : min= 144, max= 186, avg=173.30, stdev=11.85, samples=20 00:42:49.898 lat (msec) : 20=91.47%, 50=8.41%, 100=0.12% 00:42:49.898 cpu : usr=95.03%, sys=4.42%, ctx=17, majf=0, minf=160 00:42:49.898 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:49.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:49.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:49.898 issued rwts: total=1735,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:49.898 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:49.898 00:42:49.898 Run status group 0 (all jobs): 00:42:49.898 READ: bw=66.6MiB/s (69.8MB/s), 21.6MiB/s-23.1MiB/s (22.6MB/s-24.2MB/s), io=669MiB (702MB), run=10043-10049msec 00:42:49.898 02:48:38 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:42:49.898 02:48:38 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:42:49.898 02:48:38 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:42:49.898 02:48:38 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:49.898 02:48:38 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:42:49.898 02:48:38 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:49.898 02:48:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:49.898 02:48:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:49.898 
02:48:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:49.898 02:48:38 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:49.898 02:48:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:49.898 02:48:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:49.898 02:48:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:49.898 00:42:49.898 real 0m11.093s 00:42:49.898 user 0m29.533s 00:42:49.898 sys 0m1.607s 00:42:49.898 02:48:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:49.898 02:48:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:49.898 ************************************ 00:42:49.898 END TEST fio_dif_digest 00:42:49.898 ************************************ 00:42:49.898 02:48:38 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:42:49.898 02:48:38 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:42:49.898 02:48:38 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:42:49.898 02:48:38 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:42:49.898 02:48:38 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:42:49.898 02:48:38 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:42:49.898 02:48:38 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:42:49.898 02:48:38 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:42:49.898 02:48:38 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:42:49.898 rmmod nvme_tcp 00:42:49.898 rmmod nvme_fabrics 00:42:49.898 rmmod nvme_keyring 00:42:49.898 02:48:38 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:42:49.898 02:48:38 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:42:49.898 02:48:38 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:42:49.898 02:48:38 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1988271 ']' 00:42:49.898 02:48:38 nvmf_dif -- 
nvmf/common.sh@490 -- # killprocess 1988271 00:42:49.898 02:48:38 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1988271 ']' 00:42:49.898 02:48:38 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1988271 00:42:49.898 02:48:38 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:42:49.898 02:48:38 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:49.898 02:48:38 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1988271 00:42:49.898 02:48:38 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:49.898 02:48:38 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:49.898 02:48:38 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1988271' 00:42:49.898 killing process with pid 1988271 00:42:49.898 02:48:38 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1988271 00:42:49.898 02:48:38 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1988271 00:42:49.898 02:48:38 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:42:49.898 02:48:38 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:49.898 Waiting for block devices as requested 00:42:49.898 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:42:49.898 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:42:49.898 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:42:49.898 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:42:49.898 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:42:49.898 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:42:49.898 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:42:50.156 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:42:50.156 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:42:50.156 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:42:50.156 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:42:50.414 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:42:50.414 0000:80:04.4 (8086 3c24): 
vfio-pci -> ioatdma 00:42:50.414 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:42:50.414 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:42:50.672 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:42:50.672 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:42:50.672 02:48:40 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:42:50.672 02:48:40 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:42:50.672 02:48:40 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:42:50.672 02:48:40 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:42:50.672 02:48:40 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:50.672 02:48:40 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:50.672 02:48:40 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:53.201 02:48:43 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:42:53.201 00:42:53.201 real 1m5.102s 00:42:53.201 user 6m27.712s 00:42:53.201 sys 0m16.175s 00:42:53.201 02:48:43 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:53.201 02:48:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:53.201 ************************************ 00:42:53.201 END TEST nvmf_dif 00:42:53.201 ************************************ 00:42:53.201 02:48:43 -- common/autotest_common.sh@1142 -- # return 0 00:42:53.201 02:48:43 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:53.201 02:48:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:53.201 02:48:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:53.201 02:48:43 -- common/autotest_common.sh@10 -- # set +x 00:42:53.201 ************************************ 00:42:53.201 START TEST nvmf_abort_qd_sizes 00:42:53.201 ************************************ 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:53.201 * Looking for test storage... 00:42:53.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:53.201 
02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- 
paths/export.sh@5 -- # export PATH 00:42:53.201 02:48:43 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:53.202 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:42:53.202 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:42:53.202 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:42:53.202 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:53.202 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:53.202 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:53.202 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:42:53.202 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:42:53.202 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:42:53.202 02:48:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:42:53.202 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:42:53.202 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:53.202 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:42:53.202 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:42:53.202 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:42:53.202 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:53.202 02:48:43 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:53.202 02:48:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:53.202 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:42:53.202 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:42:53.202 02:48:43 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:42:53.202 02:48:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:42:54.575 Found 0000:08:00.0 (0x8086 - 0x159b) 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:42:54.575 Found 0000:08:00.1 (0x8086 - 0x159b) 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:42:54.575 Found net devices under 0000:08:00.0: cvl_0_0 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:42:54.575 Found net devices under 0000:08:00.1: cvl_0_1 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:54.575 02:48:44 
nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:42:54.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:42:54.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:42:54.575 00:42:54.575 --- 10.0.0.2 ping statistics --- 00:42:54.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:54.575 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:54.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:54.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:42:54.575 00:42:54.575 --- 10.0.0.1 ping statistics --- 00:42:54.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:54.575 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:42:54.575 02:48:44 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:55.512 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:42:55.512 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:42:55.512 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:42:55.512 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:42:55.512 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:42:55.512 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:42:55.512 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:42:55.512 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:42:55.512 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:42:55.512 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:42:55.512 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:42:55.512 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:42:55.512 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:42:55.512 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:42:55.512 0000:80:04.1 (8086 3c21): 
ioatdma -> vfio-pci 00:42:55.512 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:42:56.448 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:42:56.448 02:48:46 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:56.448 02:48:46 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:42:56.448 02:48:46 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:42:56.448 02:48:46 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:56.448 02:48:46 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:42:56.448 02:48:46 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:42:56.448 02:48:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:42:56.448 02:48:46 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:42:56.448 02:48:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:42:56.448 02:48:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:56.448 02:48:46 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:42:56.449 02:48:46 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1997228 00:42:56.449 02:48:46 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1997228 00:42:56.449 02:48:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1997228 ']' 00:42:56.449 02:48:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:56.449 02:48:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:56.449 02:48:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:42:56.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:56.449 02:48:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:56.449 02:48:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:56.449 [2024-07-11 02:48:46.858312] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:42:56.449 [2024-07-11 02:48:46.858413] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:56.706 EAL: No free 2048 kB hugepages reported on node 1 00:42:56.706 [2024-07-11 02:48:46.923596] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:56.706 [2024-07-11 02:48:47.012535] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:56.706 [2024-07-11 02:48:47.012596] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:56.706 [2024-07-11 02:48:47.012613] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:56.706 [2024-07-11 02:48:47.012627] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:56.706 [2024-07-11 02:48:47.012639] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:42:56.706 [2024-07-11 02:48:47.012718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:56.706 [2024-07-11 02:48:47.012773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:42:56.706 [2024-07-11 02:48:47.012821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:42:56.706 [2024-07-11 02:48:47.012823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:56.706 02:48:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:56.706 02:48:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:42:56.706 02:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:42:56.706 02:48:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:42:56.706 02:48:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:56.964 02:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:56.964 02:48:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:42:56.964 02:48:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:42:56.964 02:48:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:42:56.964 02:48:47 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:42:56.964 02:48:47 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:42:56.964 02:48:47 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:84:00.0 ]] 00:42:56.964 02:48:47 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:42:56.964 02:48:47 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:42:56.964 02:48:47 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:84:00.0 ]] 
00:42:56.964 02:48:47 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:42:56.964 02:48:47 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:42:56.964 02:48:47 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:42:56.964 02:48:47 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:42:56.964 02:48:47 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:84:00.0 00:42:56.964 02:48:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:42:56.964 02:48:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:84:00.0 00:42:56.964 02:48:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:42:56.964 02:48:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:56.964 02:48:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:56.964 02:48:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:56.964 ************************************ 00:42:56.964 START TEST spdk_target_abort 00:42:56.964 ************************************ 00:42:56.964 02:48:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:42:56.964 02:48:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:42:56.964 02:48:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:84:00.0 -b spdk_target 00:42:56.964 02:48:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:56.964 02:48:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:00.244 spdk_targetn1 00:43:00.244 02:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:00.244 02:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:00.244 02:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:00.244 02:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:00.244 [2024-07-11 02:48:49.992376] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:00.244 02:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:00.244 02:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:43:00.244 02:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:00.244 02:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:00.244 [2024-07-11 02:48:50.024629] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:00.244 02:48:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:00.244 EAL: No free 2048 kB hugepages reported on node 1 00:43:03.523 Initializing NVMe Controllers 00:43:03.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:03.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:03.523 Initialization complete. Launching workers. 
00:43:03.523 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10575, failed: 0 00:43:03.523 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1166, failed to submit 9409 00:43:03.523 success 722, unsuccess 444, failed 0 00:43:03.523 02:48:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:03.524 02:48:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:03.524 EAL: No free 2048 kB hugepages reported on node 1 00:43:06.824 Initializing NVMe Controllers 00:43:06.824 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:06.824 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:06.824 Initialization complete. Launching workers. 
00:43:06.824 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8510, failed: 0 00:43:06.824 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1273, failed to submit 7237 00:43:06.824 success 337, unsuccess 936, failed 0 00:43:06.824 02:48:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:06.824 02:48:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:06.824 EAL: No free 2048 kB hugepages reported on node 1 00:43:10.097 Initializing NVMe Controllers 00:43:10.097 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:10.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:10.097 Initialization complete. Launching workers. 
00:43:10.097 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29823, failed: 0 00:43:10.097 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2631, failed to submit 27192 00:43:10.097 success 457, unsuccess 2174, failed 0 00:43:10.097 02:48:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:43:10.097 02:48:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:10.097 02:48:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:10.097 02:48:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:10.097 02:48:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:43:10.097 02:48:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:10.097 02:48:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:11.030 02:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:11.030 02:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1997228 00:43:11.030 02:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1997228 ']' 00:43:11.030 02:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1997228 00:43:11.030 02:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:43:11.030 02:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:11.030 02:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1997228 00:43:11.030 02:49:01 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@954 -- # process_name=reactor_0 00:43:11.030 02:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:43:11.030 02:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1997228' 00:43:11.030 killing process with pid 1997228 00:43:11.030 02:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1997228 00:43:11.030 02:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1997228 00:43:11.030 00:43:11.030 real 0m14.161s 00:43:11.030 user 0m53.888s 00:43:11.030 sys 0m2.322s 00:43:11.030 02:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:11.030 02:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:11.030 ************************************ 00:43:11.030 END TEST spdk_target_abort 00:43:11.030 ************************************ 00:43:11.030 02:49:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:43:11.030 02:49:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:43:11.030 02:49:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:11.030 02:49:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:11.030 02:49:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:11.030 ************************************ 00:43:11.030 START TEST kernel_target_abort 00:43:11.030 ************************************ 00:43:11.030 02:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:43:11.030 02:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:43:11.030 02:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local 
ip 00:43:11.030 02:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:43:11.030 02:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:43:11.030 02:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:43:11.030 02:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:43:11.031 02:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:43:11.031 02:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:43:11.031 02:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:43:11.031 02:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:43:11.031 02:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:43:11.031 02:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:43:11.031 02:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:43:11.031 02:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:43:11.031 02:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:11.031 02:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:11.031 02:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:43:11.031 02:49:01 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:43:11.031 02:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:43:11.031 02:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:43:11.031 02:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:43:11.031 02:49:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:11.966 Waiting for block devices as requested 00:43:11.966 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:43:12.224 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:43:12.224 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:43:12.224 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:43:12.224 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:43:12.482 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:43:12.482 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:43:12.482 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:43:12.482 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:43:12.740 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:43:12.740 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:43:12.740 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:43:13.013 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:43:13.013 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:43:13.013 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:43:13.013 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:43:13.293 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:43:13.293 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:43:13.293 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:43:13.293 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:43:13.293 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:43:13.293 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:43:13.293 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:43:13.293 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:43:13.293 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:43:13.293 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:43:13.293 No valid GPT data, bailing 00:43:13.293 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:43:13.293 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:43:13.293 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:43:13.293 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:43:13.293 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:43:13.293 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:13.293 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:13.293 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:43:13.293 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:43:13.293 02:49:03 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:43:13.293 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:43:13.293 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:43:13.293 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:43:13.293 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:43:13.293 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:43:13.293 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:43:13.293 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:43:13.293 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:43:13.550 00:43:13.550 Discovery Log Number of Records 2, Generation counter 2 00:43:13.550 =====Discovery Log Entry 0====== 00:43:13.551 trtype: tcp 00:43:13.551 adrfam: ipv4 00:43:13.551 subtype: current discovery subsystem 00:43:13.551 treq: not specified, sq flow control disable supported 00:43:13.551 portid: 1 00:43:13.551 trsvcid: 4420 00:43:13.551 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:43:13.551 traddr: 10.0.0.1 00:43:13.551 eflags: none 00:43:13.551 sectype: none 00:43:13.551 =====Discovery Log Entry 1====== 00:43:13.551 trtype: tcp 00:43:13.551 adrfam: ipv4 00:43:13.551 subtype: nvme subsystem 00:43:13.551 treq: not specified, sq flow control disable supported 00:43:13.551 portid: 1 00:43:13.551 trsvcid: 4420 00:43:13.551 subnqn: nqn.2016-06.io.spdk:testnqn 00:43:13.551 traddr: 10.0.0.1 00:43:13.551 eflags: none 00:43:13.551 
sectype: none 00:43:13.551 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:43:13.551 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:13.551 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:13.551 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:43:13.551 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:13.551 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:13.551 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:13.551 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:13.551 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:13.551 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:13.551 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:13.551 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:13.551 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:13.551 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:13.551 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:43:13.551 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:13.551 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:43:13.551 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:13.551 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:13.551 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:13.551 02:49:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:13.551 EAL: No free 2048 kB hugepages reported on node 1 00:43:16.832 Initializing NVMe Controllers 00:43:16.832 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:16.832 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:16.832 Initialization complete. Launching workers. 
00:43:16.832 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 44313, failed: 0 00:43:16.832 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 44313, failed to submit 0 00:43:16.832 success 0, unsuccess 44313, failed 0 00:43:16.832 02:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:16.832 02:49:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:16.832 EAL: No free 2048 kB hugepages reported on node 1 00:43:20.113 Initializing NVMe Controllers 00:43:20.113 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:20.113 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:20.113 Initialization complete. Launching workers. 
00:43:20.113 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 78400, failed: 0 00:43:20.113 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19750, failed to submit 58650 00:43:20.113 success 0, unsuccess 19750, failed 0 00:43:20.113 02:49:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:20.113 02:49:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:20.113 EAL: No free 2048 kB hugepages reported on node 1 00:43:22.640 Initializing NVMe Controllers 00:43:22.640 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:22.640 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:22.640 Initialization complete. Launching workers. 
00:43:22.640 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 75959, failed: 0 00:43:22.640 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18974, failed to submit 56985 00:43:22.640 success 0, unsuccess 18974, failed 0 00:43:22.640 02:49:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:43:22.640 02:49:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:43:22.640 02:49:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:43:22.898 02:49:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:22.898 02:49:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:22.898 02:49:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:43:22.898 02:49:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:22.898 02:49:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:43:22.898 02:49:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:43:22.898 02:49:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:23.832 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:43:23.832 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:43:23.832 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:43:23.832 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:43:23.832 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:43:23.832 
0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:43:23.832 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:43:23.832 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:43:23.832 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:43:23.832 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:43:23.832 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:43:24.091 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:43:24.091 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:43:24.091 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:43:24.091 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:43:24.091 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:43:25.029 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:43:25.029 00:43:25.029 real 0m13.848s 00:43:25.029 user 0m6.705s 00:43:25.029 sys 0m2.795s 00:43:25.029 02:49:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:25.029 02:49:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:25.029 ************************************ 00:43:25.029 END TEST kernel_target_abort 00:43:25.029 ************************************ 00:43:25.029 02:49:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:43:25.029 02:49:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:43:25.029 02:49:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:43:25.029 02:49:15 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:43:25.029 02:49:15 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:43:25.029 02:49:15 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:43:25.029 02:49:15 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:43:25.029 02:49:15 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:43:25.029 02:49:15 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:43:25.029 rmmod nvme_tcp 00:43:25.029 rmmod nvme_fabrics 
00:43:25.029 rmmod nvme_keyring 00:43:25.029 02:49:15 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:43:25.029 02:49:15 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:43:25.029 02:49:15 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:43:25.029 02:49:15 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1997228 ']' 00:43:25.029 02:49:15 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1997228 00:43:25.029 02:49:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1997228 ']' 00:43:25.029 02:49:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1997228 00:43:25.029 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1997228) - No such process 00:43:25.029 02:49:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1997228 is not found' 00:43:25.029 Process with pid 1997228 is not found 00:43:25.029 02:49:15 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:43:25.029 02:49:15 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:25.965 Waiting for block devices as requested 00:43:25.965 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:43:26.224 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:43:26.224 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:43:26.224 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:43:26.224 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:43:26.482 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:43:26.482 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:43:26.482 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:43:26.482 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:43:26.741 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:43:26.741 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:43:26.741 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:43:27.000 0000:80:04.4 (8086 3c24): 
vfio-pci -> ioatdma 00:43:27.000 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:43:27.000 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:43:27.000 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:43:27.258 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:43:27.258 02:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:43:27.258 02:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:43:27.258 02:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:43:27.258 02:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:43:27.258 02:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:27.258 02:49:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:27.258 02:49:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:29.792 02:49:19 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:43:29.792 00:43:29.792 real 0m36.519s 00:43:29.792 user 1m2.465s 00:43:29.792 sys 0m8.013s 00:43:29.792 02:49:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:29.792 02:49:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:29.792 ************************************ 00:43:29.792 END TEST nvmf_abort_qd_sizes 00:43:29.792 ************************************ 00:43:29.792 02:49:19 -- common/autotest_common.sh@1142 -- # return 0 00:43:29.792 02:49:19 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:29.792 02:49:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:29.792 02:49:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:29.792 02:49:19 -- common/autotest_common.sh@10 -- # set +x 00:43:29.792 ************************************ 00:43:29.792 START TEST keyring_file 00:43:29.792 
************************************ 00:43:29.792 02:49:19 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:29.792 * Looking for test storage... 00:43:29.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:29.792 02:49:19 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:29.792 02:49:19 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:29.792 
02:49:19 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:29.792 02:49:19 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:29.792 02:49:19 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:29.792 02:49:19 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:29.792 02:49:19 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:29.792 02:49:19 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:29.792 02:49:19 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:29.792 02:49:19 
keyring_file -- paths/export.sh@5 -- # export PATH 00:43:29.792 02:49:19 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@47 -- # : 0 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:43:29.792 02:49:19 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:29.792 02:49:19 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:29.792 02:49:19 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:29.792 02:49:19 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:43:29.792 02:49:19 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:43:29.792 02:49:19 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:43:29.792 02:49:19 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:29.792 02:49:19 keyring_file -- keyring/common.sh@15 -- # local 
name key digest path 00:43:29.792 02:49:19 keyring_file -- keyring/common.sh@17 -- # name=key0 00:43:29.792 02:49:19 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:29.792 02:49:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:29.792 02:49:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:29.792 02:49:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ULKnwz1GQe 00:43:29.792 02:49:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@705 -- # python - 00:43:29.792 02:49:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ULKnwz1GQe 00:43:29.792 02:49:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ULKnwz1GQe 00:43:29.792 02:49:19 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ULKnwz1GQe 00:43:29.792 02:49:19 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:43:29.792 02:49:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:29.792 02:49:19 keyring_file -- keyring/common.sh@17 -- # name=key1 00:43:29.792 02:49:19 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:29.792 02:49:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:29.792 02:49:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:29.792 02:49:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.t4DpgJh36R 00:43:29.792 02:49:19 
keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:43:29.792 02:49:19 keyring_file -- nvmf/common.sh@705 -- # python - 00:43:29.792 02:49:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.t4DpgJh36R 00:43:29.792 02:49:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.t4DpgJh36R 00:43:29.792 02:49:19 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.t4DpgJh36R 00:43:29.792 02:49:19 keyring_file -- keyring/file.sh@30 -- # tgtpid=2001716 00:43:29.792 02:49:19 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:29.792 02:49:19 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2001716 00:43:29.792 02:49:19 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2001716 ']' 00:43:29.792 02:49:19 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:29.792 02:49:19 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:29.792 02:49:19 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:29.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:43:29.792 02:49:19 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:29.792 02:49:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:29.792 [2024-07-11 02:49:19.888337] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:43:29.793 [2024-07-11 02:49:19.888430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2001716 ] 00:43:29.793 EAL: No free 2048 kB hugepages reported on node 1 00:43:29.793 [2024-07-11 02:49:19.963599] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:29.793 [2024-07-11 02:49:20.056005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:30.051 02:49:20 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:30.051 02:49:20 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:43:30.051 02:49:20 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:43:30.051 02:49:20 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:30.051 02:49:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:30.051 [2024-07-11 02:49:20.275535] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:30.051 null0 00:43:30.051 [2024-07-11 02:49:20.307572] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:30.051 [2024-07-11 02:49:20.307930] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:30.051 [2024-07-11 02:49:20.315586] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:43:30.051 02:49:20 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:30.051 02:49:20 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t 
tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:30.051 02:49:20 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:43:30.051 02:49:20 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:30.051 02:49:20 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:43:30.051 02:49:20 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:30.051 02:49:20 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:43:30.051 02:49:20 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:30.051 02:49:20 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:30.051 02:49:20 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:30.051 02:49:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:30.051 [2024-07-11 02:49:20.327605] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:43:30.051 request: 00:43:30.051 { 00:43:30.051 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:43:30.051 "secure_channel": false, 00:43:30.051 "listen_address": { 00:43:30.051 "trtype": "tcp", 00:43:30.051 "traddr": "127.0.0.1", 00:43:30.051 "trsvcid": "4420" 00:43:30.051 }, 00:43:30.051 "method": "nvmf_subsystem_add_listener", 00:43:30.051 "req_id": 1 00:43:30.051 } 00:43:30.051 Got JSON-RPC error response 00:43:30.051 response: 00:43:30.051 { 00:43:30.051 "code": -32602, 00:43:30.051 "message": "Invalid parameters" 00:43:30.051 } 00:43:30.051 02:49:20 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:43:30.051 02:49:20 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:43:30.051 02:49:20 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:30.051 02:49:20 keyring_file -- common/autotest_common.sh@670 -- 
# [[ -n '' ]] 00:43:30.051 02:49:20 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:30.051 02:49:20 keyring_file -- keyring/file.sh@46 -- # bperfpid=2001732 00:43:30.051 02:49:20 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:43:30.051 02:49:20 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2001732 /var/tmp/bperf.sock 00:43:30.051 02:49:20 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2001732 ']' 00:43:30.051 02:49:20 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:30.051 02:49:20 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:30.051 02:49:20 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:30.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:30.051 02:49:20 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:30.051 02:49:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:30.051 [2024-07-11 02:49:20.378316] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:43:30.051 [2024-07-11 02:49:20.378411] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2001732 ] 00:43:30.051 EAL: No free 2048 kB hugepages reported on node 1 00:43:30.051 [2024-07-11 02:49:20.437540] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:30.309 [2024-07-11 02:49:20.525251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:30.309 02:49:20 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:30.309 02:49:20 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:43:30.309 02:49:20 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ULKnwz1GQe 00:43:30.309 02:49:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ULKnwz1GQe 00:43:30.567 02:49:20 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.t4DpgJh36R 00:43:30.567 02:49:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.t4DpgJh36R 00:43:30.825 02:49:21 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:43:30.825 02:49:21 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:43:30.825 02:49:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:30.825 02:49:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:30.825 02:49:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:31.391 02:49:21 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.ULKnwz1GQe == 
\/\t\m\p\/\t\m\p\.\U\L\K\n\w\z\1\G\Q\e ]] 00:43:31.391 02:49:21 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:43:31.391 02:49:21 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:43:31.391 02:49:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:31.391 02:49:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:31.391 02:49:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:31.648 02:49:21 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.t4DpgJh36R == \/\t\m\p\/\t\m\p\.\t\4\D\p\g\J\h\3\6\R ]] 00:43:31.648 02:49:21 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:43:31.648 02:49:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:31.648 02:49:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:31.648 02:49:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:31.648 02:49:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:31.648 02:49:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:31.905 02:49:22 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:43:31.905 02:49:22 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:43:31.905 02:49:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:31.905 02:49:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:31.905 02:49:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:31.905 02:49:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:31.905 02:49:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:32.163 02:49:22 keyring_file -- keyring/file.sh@54 -- # 
(( 1 == 1 )) 00:43:32.163 02:49:22 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:32.163 02:49:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:32.421 [2024-07-11 02:49:22.664761] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:32.421 nvme0n1 00:43:32.421 02:49:22 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:43:32.421 02:49:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:32.421 02:49:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:32.421 02:49:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:32.421 02:49:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:32.421 02:49:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:32.678 02:49:22 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:43:32.678 02:49:22 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:43:32.678 02:49:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:32.678 02:49:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:32.678 02:49:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:32.679 02:49:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:32.679 02:49:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:32.936 02:49:23 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 )) 00:43:32.936 02:49:23 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:32.936 Running I/O for 1 seconds... 00:43:34.305 00:43:34.305 Latency(us) 00:43:34.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:34.305 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:43:34.305 nvme0n1 : 1.01 8327.20 32.53 0.00 0.00 15297.16 8398.32 27185.30 00:43:34.305 =================================================================================================================== 00:43:34.305 Total : 8327.20 32.53 0.00 0.00 15297.16 8398.32 27185.30 00:43:34.305 0 00:43:34.305 02:49:24 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:34.305 02:49:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:34.305 02:49:24 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:43:34.305 02:49:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:34.305 02:49:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:34.305 02:49:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:34.305 02:49:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:34.305 02:49:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:34.562 02:49:24 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:43:34.562 02:49:24 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:43:34.562 02:49:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:34.562 02:49:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:34.562 02:49:24 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:34.562 02:49:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:34.562 02:49:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:34.819 02:49:25 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:43:34.819 02:49:25 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:34.819 02:49:25 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:43:34.819 02:49:25 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:34.819 02:49:25 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:43:34.819 02:49:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:34.820 02:49:25 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:43:34.820 02:49:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:34.820 02:49:25 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:34.820 02:49:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:35.076 [2024-07-11 02:49:25.387414] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:35.076 [2024-07-11 02:49:25.387425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2414560 (107): Transport endpoint is not connected 00:43:35.076 [2024-07-11 02:49:25.388416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2414560 (9): Bad file descriptor 00:43:35.076 [2024-07-11 02:49:25.389431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:35.076 [2024-07-11 02:49:25.389453] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:35.076 [2024-07-11 02:49:25.389469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:43:35.076 request: 00:43:35.076 { 00:43:35.076 "name": "nvme0", 00:43:35.076 "trtype": "tcp", 00:43:35.076 "traddr": "127.0.0.1", 00:43:35.076 "adrfam": "ipv4", 00:43:35.076 "trsvcid": "4420", 00:43:35.076 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:35.076 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:35.076 "prchk_reftag": false, 00:43:35.076 "prchk_guard": false, 00:43:35.076 "hdgst": false, 00:43:35.076 "ddgst": false, 00:43:35.076 "psk": "key1", 00:43:35.076 "method": "bdev_nvme_attach_controller", 00:43:35.076 "req_id": 1 00:43:35.076 } 00:43:35.076 Got JSON-RPC error response 00:43:35.076 response: 00:43:35.076 { 00:43:35.076 "code": -5, 00:43:35.076 "message": "Input/output error" 00:43:35.076 } 00:43:35.076 02:49:25 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:43:35.076 02:49:25 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:35.076 02:49:25 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:43:35.076 02:49:25 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:35.076 02:49:25 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:43:35.076 
02:49:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:35.076 02:49:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:35.076 02:49:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:35.076 02:49:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:35.076 02:49:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:35.332 02:49:25 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:43:35.332 02:49:25 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:43:35.332 02:49:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:35.332 02:49:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:35.333 02:49:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:35.333 02:49:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:35.333 02:49:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:35.589 02:49:25 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:43:35.589 02:49:25 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:43:35.589 02:49:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:35.846 02:49:26 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:43:35.846 02:49:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:43:36.103 02:49:26 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:43:36.103 02:49:26 keyring_file -- keyring/file.sh@77 -- # jq length 00:43:36.103 02:49:26 keyring_file 
-- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:36.360 02:49:26 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:43:36.360 02:49:26 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.ULKnwz1GQe 00:43:36.360 02:49:26 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ULKnwz1GQe 00:43:36.360 02:49:26 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:43:36.360 02:49:26 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ULKnwz1GQe 00:43:36.360 02:49:26 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:43:36.360 02:49:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:36.360 02:49:26 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:43:36.360 02:49:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:36.360 02:49:26 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ULKnwz1GQe 00:43:36.360 02:49:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ULKnwz1GQe 00:43:36.626 [2024-07-11 02:49:26.865287] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ULKnwz1GQe': 0100660 00:43:36.626 [2024-07-11 02:49:26.865338] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:43:36.626 request: 00:43:36.626 { 00:43:36.626 "name": "key0", 00:43:36.626 "path": "/tmp/tmp.ULKnwz1GQe", 00:43:36.626 "method": "keyring_file_add_key", 00:43:36.626 "req_id": 1 00:43:36.626 } 00:43:36.626 Got JSON-RPC error response 00:43:36.626 response: 00:43:36.626 { 00:43:36.626 "code": -1, 00:43:36.626 "message": "Operation not permitted" 
00:43:36.626 } 00:43:36.626 02:49:26 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:43:36.626 02:49:26 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:36.626 02:49:26 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:43:36.626 02:49:26 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:36.626 02:49:26 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.ULKnwz1GQe 00:43:36.626 02:49:26 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ULKnwz1GQe 00:43:36.626 02:49:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ULKnwz1GQe 00:43:36.906 02:49:27 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.ULKnwz1GQe 00:43:36.906 02:49:27 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:43:36.906 02:49:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:36.906 02:49:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:36.906 02:49:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:36.906 02:49:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:36.906 02:49:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:37.164 02:49:27 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:43:37.164 02:49:27 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:37.164 02:49:27 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:43:37.164 02:49:27 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:37.164 02:49:27 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:43:37.164 02:49:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:37.164 02:49:27 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:43:37.164 02:49:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:37.164 02:49:27 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:37.164 02:49:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:37.422 [2024-07-11 02:49:27.619320] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ULKnwz1GQe': No such file or directory 00:43:37.422 [2024-07-11 02:49:27.619364] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:43:37.422 [2024-07-11 02:49:27.619399] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:43:37.422 [2024-07-11 02:49:27.619413] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:43:37.422 [2024-07-11 02:49:27.619426] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:43:37.422 request: 00:43:37.422 { 00:43:37.422 "name": "nvme0", 00:43:37.422 "trtype": "tcp", 00:43:37.422 "traddr": "127.0.0.1", 00:43:37.422 "adrfam": "ipv4", 00:43:37.422 "trsvcid": "4420", 00:43:37.422 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:37.422 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:37.422 
"prchk_reftag": false, 00:43:37.422 "prchk_guard": false, 00:43:37.422 "hdgst": false, 00:43:37.422 "ddgst": false, 00:43:37.422 "psk": "key0", 00:43:37.422 "method": "bdev_nvme_attach_controller", 00:43:37.422 "req_id": 1 00:43:37.422 } 00:43:37.422 Got JSON-RPC error response 00:43:37.422 response: 00:43:37.422 { 00:43:37.422 "code": -19, 00:43:37.422 "message": "No such device" 00:43:37.422 } 00:43:37.422 02:49:27 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:43:37.422 02:49:27 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:37.422 02:49:27 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:43:37.422 02:49:27 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:37.422 02:49:27 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:43:37.422 02:49:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:37.680 02:49:27 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:37.680 02:49:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:37.680 02:49:27 keyring_file -- keyring/common.sh@17 -- # name=key0 00:43:37.680 02:49:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:37.680 02:49:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:37.680 02:49:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:37.680 02:49:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.KsyvWXmSaT 00:43:37.680 02:49:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:37.680 02:49:27 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:37.680 02:49:27 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:43:37.680 02:49:27 keyring_file -- 
nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:43:37.680 02:49:27 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:43:37.680 02:49:27 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:43:37.680 02:49:27 keyring_file -- nvmf/common.sh@705 -- # python - 00:43:37.680 02:49:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.KsyvWXmSaT 00:43:37.680 02:49:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.KsyvWXmSaT 00:43:37.680 02:49:27 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.KsyvWXmSaT 00:43:37.680 02:49:27 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.KsyvWXmSaT 00:43:37.680 02:49:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.KsyvWXmSaT 00:43:37.939 02:49:28 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:37.939 02:49:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:38.197 nvme0n1 00:43:38.197 02:49:28 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:43:38.197 02:49:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:38.197 02:49:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:38.197 02:49:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:38.197 02:49:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:38.197 02:49:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:43:38.455 02:49:28 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:43:38.455 02:49:28 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:43:38.455 02:49:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:38.713 02:49:28 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:43:38.713 02:49:28 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:43:38.713 02:49:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:38.713 02:49:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:38.713 02:49:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:38.972 02:49:29 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:43:38.972 02:49:29 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:43:38.972 02:49:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:38.972 02:49:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:38.972 02:49:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:38.972 02:49:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:38.972 02:49:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:39.230 02:49:29 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:43:39.230 02:49:29 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:39.230 02:49:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:39.489 02:49:29 keyring_file -- keyring/file.sh@104 -- # bperf_cmd 
keyring_get_keys 00:43:39.489 02:49:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:39.489 02:49:29 keyring_file -- keyring/file.sh@104 -- # jq length 00:43:39.747 02:49:29 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:43:39.747 02:49:29 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.KsyvWXmSaT 00:43:39.747 02:49:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.KsyvWXmSaT 00:43:40.005 02:49:30 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.t4DpgJh36R 00:43:40.005 02:49:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.t4DpgJh36R 00:43:40.264 02:49:30 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:40.264 02:49:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:40.523 nvme0n1 00:43:40.523 02:49:30 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:43:40.523 02:49:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:43:40.782 02:49:31 keyring_file -- keyring/file.sh@112 -- # config='{ 00:43:40.782 "subsystems": [ 00:43:40.782 { 00:43:40.782 "subsystem": "keyring", 00:43:40.782 "config": [ 00:43:40.782 { 00:43:40.782 "method": "keyring_file_add_key", 00:43:40.782 
"params": { 00:43:40.782 "name": "key0", 00:43:40.782 "path": "/tmp/tmp.KsyvWXmSaT" 00:43:40.782 } 00:43:40.782 }, 00:43:40.782 { 00:43:40.782 "method": "keyring_file_add_key", 00:43:40.782 "params": { 00:43:40.782 "name": "key1", 00:43:40.782 "path": "/tmp/tmp.t4DpgJh36R" 00:43:40.782 } 00:43:40.782 } 00:43:40.782 ] 00:43:40.782 }, 00:43:40.782 { 00:43:40.782 "subsystem": "iobuf", 00:43:40.782 "config": [ 00:43:40.782 { 00:43:40.782 "method": "iobuf_set_options", 00:43:40.782 "params": { 00:43:40.782 "small_pool_count": 8192, 00:43:40.782 "large_pool_count": 1024, 00:43:40.782 "small_bufsize": 8192, 00:43:40.782 "large_bufsize": 135168 00:43:40.782 } 00:43:40.782 } 00:43:40.782 ] 00:43:40.782 }, 00:43:40.782 { 00:43:40.782 "subsystem": "sock", 00:43:40.782 "config": [ 00:43:40.782 { 00:43:40.782 "method": "sock_set_default_impl", 00:43:40.782 "params": { 00:43:40.782 "impl_name": "posix" 00:43:40.782 } 00:43:40.782 }, 00:43:40.782 { 00:43:40.782 "method": "sock_impl_set_options", 00:43:40.782 "params": { 00:43:40.782 "impl_name": "ssl", 00:43:40.782 "recv_buf_size": 4096, 00:43:40.782 "send_buf_size": 4096, 00:43:40.782 "enable_recv_pipe": true, 00:43:40.782 "enable_quickack": false, 00:43:40.782 "enable_placement_id": 0, 00:43:40.782 "enable_zerocopy_send_server": true, 00:43:40.782 "enable_zerocopy_send_client": false, 00:43:40.782 "zerocopy_threshold": 0, 00:43:40.782 "tls_version": 0, 00:43:40.782 "enable_ktls": false 00:43:40.782 } 00:43:40.782 }, 00:43:40.782 { 00:43:40.782 "method": "sock_impl_set_options", 00:43:40.782 "params": { 00:43:40.782 "impl_name": "posix", 00:43:40.782 "recv_buf_size": 2097152, 00:43:40.782 "send_buf_size": 2097152, 00:43:40.782 "enable_recv_pipe": true, 00:43:40.782 "enable_quickack": false, 00:43:40.782 "enable_placement_id": 0, 00:43:40.782 "enable_zerocopy_send_server": true, 00:43:40.782 "enable_zerocopy_send_client": false, 00:43:40.782 "zerocopy_threshold": 0, 00:43:40.782 "tls_version": 0, 00:43:40.782 "enable_ktls": false 
00:43:40.782 } 00:43:40.782 } 00:43:40.782 ] 00:43:40.782 }, 00:43:40.782 { 00:43:40.782 "subsystem": "vmd", 00:43:40.782 "config": [] 00:43:40.782 }, 00:43:40.782 { 00:43:40.782 "subsystem": "accel", 00:43:40.782 "config": [ 00:43:40.782 { 00:43:40.782 "method": "accel_set_options", 00:43:40.782 "params": { 00:43:40.782 "small_cache_size": 128, 00:43:40.782 "large_cache_size": 16, 00:43:40.782 "task_count": 2048, 00:43:40.782 "sequence_count": 2048, 00:43:40.782 "buf_count": 2048 00:43:40.782 } 00:43:40.782 } 00:43:40.782 ] 00:43:40.782 }, 00:43:40.782 { 00:43:40.782 "subsystem": "bdev", 00:43:40.782 "config": [ 00:43:40.782 { 00:43:40.782 "method": "bdev_set_options", 00:43:40.782 "params": { 00:43:40.782 "bdev_io_pool_size": 65535, 00:43:40.782 "bdev_io_cache_size": 256, 00:43:40.782 "bdev_auto_examine": true, 00:43:40.782 "iobuf_small_cache_size": 128, 00:43:40.782 "iobuf_large_cache_size": 16 00:43:40.782 } 00:43:40.782 }, 00:43:40.782 { 00:43:40.782 "method": "bdev_raid_set_options", 00:43:40.782 "params": { 00:43:40.782 "process_window_size_kb": 1024 00:43:40.782 } 00:43:40.782 }, 00:43:40.782 { 00:43:40.782 "method": "bdev_iscsi_set_options", 00:43:40.782 "params": { 00:43:40.782 "timeout_sec": 30 00:43:40.782 } 00:43:40.783 }, 00:43:40.783 { 00:43:40.783 "method": "bdev_nvme_set_options", 00:43:40.783 "params": { 00:43:40.783 "action_on_timeout": "none", 00:43:40.783 "timeout_us": 0, 00:43:40.783 "timeout_admin_us": 0, 00:43:40.783 "keep_alive_timeout_ms": 10000, 00:43:40.783 "arbitration_burst": 0, 00:43:40.783 "low_priority_weight": 0, 00:43:40.783 "medium_priority_weight": 0, 00:43:40.783 "high_priority_weight": 0, 00:43:40.783 "nvme_adminq_poll_period_us": 10000, 00:43:40.783 "nvme_ioq_poll_period_us": 0, 00:43:40.783 "io_queue_requests": 512, 00:43:40.783 "delay_cmd_submit": true, 00:43:40.783 "transport_retry_count": 4, 00:43:40.783 "bdev_retry_count": 3, 00:43:40.783 "transport_ack_timeout": 0, 00:43:40.783 "ctrlr_loss_timeout_sec": 0, 00:43:40.783 
"reconnect_delay_sec": 0, 00:43:40.783 "fast_io_fail_timeout_sec": 0, 00:43:40.783 "disable_auto_failback": false, 00:43:40.783 "generate_uuids": false, 00:43:40.783 "transport_tos": 0, 00:43:40.783 "nvme_error_stat": false, 00:43:40.783 "rdma_srq_size": 0, 00:43:40.783 "io_path_stat": false, 00:43:40.783 "allow_accel_sequence": false, 00:43:40.783 "rdma_max_cq_size": 0, 00:43:40.783 "rdma_cm_event_timeout_ms": 0, 00:43:40.783 "dhchap_digests": [ 00:43:40.783 "sha256", 00:43:40.783 "sha384", 00:43:40.783 "sha512" 00:43:40.783 ], 00:43:40.783 "dhchap_dhgroups": [ 00:43:40.783 "null", 00:43:40.783 "ffdhe2048", 00:43:40.783 "ffdhe3072", 00:43:40.783 "ffdhe4096", 00:43:40.783 "ffdhe6144", 00:43:40.783 "ffdhe8192" 00:43:40.783 ] 00:43:40.783 } 00:43:40.783 }, 00:43:40.783 { 00:43:40.783 "method": "bdev_nvme_attach_controller", 00:43:40.783 "params": { 00:43:40.783 "name": "nvme0", 00:43:40.783 "trtype": "TCP", 00:43:40.783 "adrfam": "IPv4", 00:43:40.783 "traddr": "127.0.0.1", 00:43:40.783 "trsvcid": "4420", 00:43:40.783 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:40.783 "prchk_reftag": false, 00:43:40.783 "prchk_guard": false, 00:43:40.783 "ctrlr_loss_timeout_sec": 0, 00:43:40.783 "reconnect_delay_sec": 0, 00:43:40.783 "fast_io_fail_timeout_sec": 0, 00:43:40.783 "psk": "key0", 00:43:40.783 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:40.783 "hdgst": false, 00:43:40.783 "ddgst": false 00:43:40.783 } 00:43:40.783 }, 00:43:40.783 { 00:43:40.783 "method": "bdev_nvme_set_hotplug", 00:43:40.783 "params": { 00:43:40.783 "period_us": 100000, 00:43:40.783 "enable": false 00:43:40.783 } 00:43:40.783 }, 00:43:40.783 { 00:43:40.783 "method": "bdev_wait_for_examine" 00:43:40.783 } 00:43:40.783 ] 00:43:40.783 }, 00:43:40.783 { 00:43:40.783 "subsystem": "nbd", 00:43:40.783 "config": [] 00:43:40.783 } 00:43:40.783 ] 00:43:40.783 }' 00:43:40.783 02:49:31 keyring_file -- keyring/file.sh@114 -- # killprocess 2001732 00:43:40.783 02:49:31 keyring_file -- common/autotest_common.sh@948 -- 
# '[' -z 2001732 ']' 00:43:40.783 02:49:31 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2001732 00:43:40.783 02:49:31 keyring_file -- common/autotest_common.sh@953 -- # uname 00:43:40.783 02:49:31 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:40.783 02:49:31 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2001732 00:43:40.783 02:49:31 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:43:40.783 02:49:31 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:43:40.783 02:49:31 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2001732' 00:43:40.783 killing process with pid 2001732 00:43:40.783 02:49:31 keyring_file -- common/autotest_common.sh@967 -- # kill 2001732 00:43:40.783 Received shutdown signal, test time was about 1.000000 seconds 00:43:40.783 00:43:40.783 Latency(us) 00:43:40.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:40.783 =================================================================================================================== 00:43:40.783 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:40.783 02:49:31 keyring_file -- common/autotest_common.sh@972 -- # wait 2001732 00:43:41.042 02:49:31 keyring_file -- keyring/file.sh@117 -- # bperfpid=2002873 00:43:41.042 02:49:31 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2002873 /var/tmp/bperf.sock 00:43:41.042 02:49:31 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2002873 ']' 00:43:41.042 02:49:31 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:41.042 02:49:31 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:41.042 02:49:31 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 
00:43:41.042 02:49:31 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:41.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:41.042 02:49:31 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:41.042 02:49:31 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:43:41.042 "subsystems": [ 00:43:41.042 { 00:43:41.042 "subsystem": "keyring", 00:43:41.042 "config": [ 00:43:41.042 { 00:43:41.042 "method": "keyring_file_add_key", 00:43:41.042 "params": { 00:43:41.042 "name": "key0", 00:43:41.042 "path": "/tmp/tmp.KsyvWXmSaT" 00:43:41.042 } 00:43:41.042 }, 00:43:41.042 { 00:43:41.042 "method": "keyring_file_add_key", 00:43:41.042 "params": { 00:43:41.042 "name": "key1", 00:43:41.042 "path": "/tmp/tmp.t4DpgJh36R" 00:43:41.042 } 00:43:41.042 } 00:43:41.042 ] 00:43:41.042 }, 00:43:41.042 { 00:43:41.042 "subsystem": "iobuf", 00:43:41.042 "config": [ 00:43:41.042 { 00:43:41.042 "method": "iobuf_set_options", 00:43:41.042 "params": { 00:43:41.042 "small_pool_count": 8192, 00:43:41.042 "large_pool_count": 1024, 00:43:41.042 "small_bufsize": 8192, 00:43:41.042 "large_bufsize": 135168 00:43:41.042 } 00:43:41.042 } 00:43:41.042 ] 00:43:41.042 }, 00:43:41.042 { 00:43:41.042 "subsystem": "sock", 00:43:41.042 "config": [ 00:43:41.042 { 00:43:41.042 "method": "sock_set_default_impl", 00:43:41.042 "params": { 00:43:41.042 "impl_name": "posix" 00:43:41.042 } 00:43:41.042 }, 00:43:41.042 { 00:43:41.042 "method": "sock_impl_set_options", 00:43:41.042 "params": { 00:43:41.042 "impl_name": "ssl", 00:43:41.042 "recv_buf_size": 4096, 00:43:41.042 "send_buf_size": 4096, 00:43:41.042 "enable_recv_pipe": true, 00:43:41.042 "enable_quickack": false, 00:43:41.042 "enable_placement_id": 0, 00:43:41.042 "enable_zerocopy_send_server": true, 00:43:41.042 "enable_zerocopy_send_client": false, 00:43:41.042 "zerocopy_threshold": 0, 
00:43:41.042 "tls_version": 0, 00:43:41.042 "enable_ktls": false 00:43:41.042 } 00:43:41.042 }, 00:43:41.042 { 00:43:41.042 "method": "sock_impl_set_options", 00:43:41.042 "params": { 00:43:41.042 "impl_name": "posix", 00:43:41.042 "recv_buf_size": 2097152, 00:43:41.042 "send_buf_size": 2097152, 00:43:41.042 "enable_recv_pipe": true, 00:43:41.042 "enable_quickack": false, 00:43:41.042 "enable_placement_id": 0, 00:43:41.042 "enable_zerocopy_send_server": true, 00:43:41.042 "enable_zerocopy_send_client": false, 00:43:41.042 "zerocopy_threshold": 0, 00:43:41.042 "tls_version": 0, 00:43:41.042 "enable_ktls": false 00:43:41.042 } 00:43:41.042 } 00:43:41.042 ] 00:43:41.042 }, 00:43:41.042 { 00:43:41.042 "subsystem": "vmd", 00:43:41.042 "config": [] 00:43:41.042 }, 00:43:41.042 { 00:43:41.042 "subsystem": "accel", 00:43:41.042 "config": [ 00:43:41.042 { 00:43:41.042 "method": "accel_set_options", 00:43:41.042 "params": { 00:43:41.042 "small_cache_size": 128, 00:43:41.042 "large_cache_size": 16, 00:43:41.042 "task_count": 2048, 00:43:41.042 "sequence_count": 2048, 00:43:41.042 "buf_count": 2048 00:43:41.042 } 00:43:41.042 } 00:43:41.042 ] 00:43:41.042 }, 00:43:41.042 { 00:43:41.042 "subsystem": "bdev", 00:43:41.042 "config": [ 00:43:41.042 { 00:43:41.042 "method": "bdev_set_options", 00:43:41.042 "params": { 00:43:41.042 "bdev_io_pool_size": 65535, 00:43:41.042 "bdev_io_cache_size": 256, 00:43:41.042 "bdev_auto_examine": true, 00:43:41.042 "iobuf_small_cache_size": 128, 00:43:41.042 "iobuf_large_cache_size": 16 00:43:41.042 } 00:43:41.042 }, 00:43:41.042 { 00:43:41.042 "method": "bdev_raid_set_options", 00:43:41.042 "params": { 00:43:41.042 "process_window_size_kb": 1024 00:43:41.042 } 00:43:41.042 }, 00:43:41.042 { 00:43:41.042 "method": "bdev_iscsi_set_options", 00:43:41.042 "params": { 00:43:41.042 "timeout_sec": 30 00:43:41.042 } 00:43:41.042 }, 00:43:41.042 { 00:43:41.042 "method": "bdev_nvme_set_options", 00:43:41.042 "params": { 00:43:41.042 "action_on_timeout": 
"none", 00:43:41.042 "timeout_us": 0, 00:43:41.042 "timeout_admin_us": 0, 00:43:41.042 "keep_alive_timeout_ms": 10000, 00:43:41.042 "arbitration_burst": 0, 00:43:41.042 "low_priority_weight": 0, 00:43:41.042 "medium_priority_weight": 0, 00:43:41.042 "high_priority_weight": 0, 00:43:41.042 "nvme_adminq_poll_period_us": 10000, 00:43:41.042 "nvme_ioq_poll_period_us": 0, 00:43:41.042 "io_queue_requests": 512, 00:43:41.042 "delay_cmd_submit": true, 00:43:41.042 "transport_retry_count": 4, 00:43:41.042 "bdev_retry_count": 3, 00:43:41.042 "transport_ack_timeout": 0, 00:43:41.042 "ctrlr_loss_timeout_sec": 0, 00:43:41.042 "reconnect_delay_sec": 0, 00:43:41.042 "fast_io_fail_timeout_sec": 0, 00:43:41.042 "disable_auto_failback": false, 00:43:41.042 "generate_uuids": false, 00:43:41.042 "transport_tos": 0, 00:43:41.042 "nvme_error_stat": false, 00:43:41.042 "rdma_srq_size": 0, 00:43:41.042 "io_path_stat": false, 00:43:41.042 "allow_accel_sequence": false, 00:43:41.042 "rdma_max_cq_size": 0, 00:43:41.042 "rdma_cm_event_timeout_ms": 0, 00:43:41.042 "dhchap_digests": [ 00:43:41.042 "sha256", 00:43:41.042 "sha384", 00:43:41.042 "sha512" 00:43:41.042 ], 00:43:41.042 "dhchap_dhgroups": [ 00:43:41.042 "null", 00:43:41.042 "ffdhe2048", 00:43:41.042 "ffdhe3072", 00:43:41.042 "ffdhe4096", 00:43:41.042 "ffdhe6144", 00:43:41.042 "ffdhe8192" 00:43:41.042 ] 00:43:41.042 } 00:43:41.042 }, 00:43:41.042 { 00:43:41.042 "method": "bdev_nvme_attach_controller", 00:43:41.042 "params": { 00:43:41.042 "name": "nvme0", 00:43:41.042 "trtype": "TCP", 00:43:41.042 "adrfam": "IPv4", 00:43:41.042 "traddr": "127.0.0.1", 00:43:41.042 "trsvcid": "4420", 00:43:41.042 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:41.042 "prchk_reftag": false, 00:43:41.042 "prchk_guard": false, 00:43:41.042 "ctrlr_loss_timeout_sec": 0, 00:43:41.042 "reconnect_delay_sec": 0, 00:43:41.042 "fast_io_fail_timeout_sec": 0, 00:43:41.042 "psk": "key0", 00:43:41.042 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:41.042 "hdgst": false, 
00:43:41.042 "ddgst": false 00:43:41.042 } 00:43:41.042 }, 00:43:41.042 { 00:43:41.042 "method": "bdev_nvme_set_hotplug", 00:43:41.042 "params": { 00:43:41.042 "period_us": 100000, 00:43:41.042 "enable": false 00:43:41.042 } 00:43:41.042 }, 00:43:41.042 { 00:43:41.042 "method": "bdev_wait_for_examine" 00:43:41.042 } 00:43:41.042 ] 00:43:41.042 }, 00:43:41.042 { 00:43:41.042 "subsystem": "nbd", 00:43:41.042 "config": [] 00:43:41.042 } 00:43:41.042 ] 00:43:41.042 }' 00:43:41.042 02:49:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:41.042 [2024-07-11 02:49:31.347062] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 00:43:41.043 [2024-07-11 02:49:31.347167] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2002873 ] 00:43:41.043 EAL: No free 2048 kB hugepages reported on node 1 00:43:41.043 [2024-07-11 02:49:31.407776] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:41.301 [2024-07-11 02:49:31.498921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:41.301 [2024-07-11 02:49:31.673035] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:41.558 02:49:31 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:41.558 02:49:31 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:43:41.558 02:49:31 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:43:41.558 02:49:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:41.558 02:49:31 keyring_file -- keyring/file.sh@120 -- # jq length 00:43:41.816 02:49:32 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:43:41.816 02:49:32 keyring_file -- 
keyring/file.sh@121 -- # get_refcnt key0 00:43:41.816 02:49:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:41.816 02:49:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:41.816 02:49:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:41.816 02:49:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:41.816 02:49:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:42.074 02:49:32 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:43:42.074 02:49:32 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:43:42.074 02:49:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:42.074 02:49:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:42.074 02:49:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:42.074 02:49:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:42.074 02:49:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:42.331 02:49:32 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:43:42.331 02:49:32 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:43:42.331 02:49:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:43:42.331 02:49:32 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:43:42.589 02:49:32 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:43:42.589 02:49:32 keyring_file -- keyring/file.sh@1 -- # cleanup 00:43:42.589 02:49:32 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.KsyvWXmSaT /tmp/tmp.t4DpgJh36R 00:43:42.589 02:49:32 keyring_file -- keyring/file.sh@20 -- # killprocess 2002873 
00:43:42.589 02:49:32 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2002873 ']' 00:43:42.589 02:49:32 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2002873 00:43:42.589 02:49:32 keyring_file -- common/autotest_common.sh@953 -- # uname 00:43:42.589 02:49:32 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:42.589 02:49:32 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2002873 00:43:42.589 02:49:32 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:43:42.589 02:49:32 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:43:42.589 02:49:32 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2002873' 00:43:42.589 killing process with pid 2002873 00:43:42.589 02:49:32 keyring_file -- common/autotest_common.sh@967 -- # kill 2002873 00:43:42.589 Received shutdown signal, test time was about 1.000000 seconds 00:43:42.589 00:43:42.589 Latency(us) 00:43:42.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:42.589 =================================================================================================================== 00:43:42.589 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:43:42.589 02:49:32 keyring_file -- common/autotest_common.sh@972 -- # wait 2002873 00:43:42.847 02:49:33 keyring_file -- keyring/file.sh@21 -- # killprocess 2001716 00:43:42.847 02:49:33 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2001716 ']' 00:43:42.847 02:49:33 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2001716 00:43:42.847 02:49:33 keyring_file -- common/autotest_common.sh@953 -- # uname 00:43:42.847 02:49:33 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:42.847 02:49:33 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2001716 00:43:42.847 02:49:33 keyring_file -- common/autotest_common.sh@954 
-- # process_name=reactor_0 00:43:42.847 02:49:33 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:43:42.847 02:49:33 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2001716' 00:43:42.847 killing process with pid 2001716 00:43:42.847 02:49:33 keyring_file -- common/autotest_common.sh@967 -- # kill 2001716 00:43:42.847 [2024-07-11 02:49:33.074458] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:43:42.847 02:49:33 keyring_file -- common/autotest_common.sh@972 -- # wait 2001716 00:43:43.106 00:43:43.106 real 0m13.699s 00:43:43.106 user 0m35.107s 00:43:43.106 sys 0m3.040s 00:43:43.106 02:49:33 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:43.106 02:49:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:43.106 ************************************ 00:43:43.106 END TEST keyring_file 00:43:43.106 ************************************ 00:43:43.106 02:49:33 -- common/autotest_common.sh@1142 -- # return 0 00:43:43.106 02:49:33 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:43:43.106 02:49:33 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:43.106 02:49:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:43.106 02:49:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:43.106 02:49:33 -- common/autotest_common.sh@10 -- # set +x 00:43:43.106 ************************************ 00:43:43.106 START TEST keyring_linux 00:43:43.106 ************************************ 00:43:43.106 02:49:33 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:43.106 * Looking for test storage... 
00:43:43.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:43.106 02:49:33 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:43.106 02:49:33 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:43.106 02:49:33 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:43.106 02:49:33 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:43.106 02:49:33 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:43.106 02:49:33 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:43.106 02:49:33 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:43.106 02:49:33 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:43.106 02:49:33 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:43.106 02:49:33 keyring_linux -- paths/export.sh@5 -- # export PATH 00:43:43.106 02:49:33 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:43:43.106 02:49:33 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:43.106 02:49:33 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:43.106 02:49:33 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:43.106 02:49:33 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:43:43.106 02:49:33 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:43:43.106 02:49:33 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:43:43.106 02:49:33 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:43:43.106 02:49:33 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:43.106 02:49:33 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:43:43.106 02:49:33 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:43.106 02:49:33 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:43.106 02:49:33 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:43:43.106 02:49:33 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:43:43.106 02:49:33 keyring_linux -- nvmf/common.sh@705 -- # python - 00:43:43.106 02:49:33 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:43:43.364 02:49:33 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:43:43.364 /tmp/:spdk-test:key0 00:43:43.364 02:49:33 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:43:43.364 02:49:33 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:43.365 02:49:33 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:43:43.365 02:49:33 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:43.365 02:49:33 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:43.365 02:49:33 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:43:43.365 02:49:33 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:43.365 02:49:33 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:43:43.365 02:49:33 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:43:43.365 02:49:33 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:43:43.365 02:49:33 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:43:43.365 02:49:33 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:43:43.365 02:49:33 keyring_linux -- nvmf/common.sh@705 -- # python - 00:43:43.365 02:49:33 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:43:43.365 02:49:33 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:43:43.365 /tmp/:spdk-test:key1 00:43:43.365 02:49:33 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2003159 00:43:43.365 02:49:33 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:43.365 02:49:33 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2003159 00:43:43.365 02:49:33 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2003159 ']' 00:43:43.365 02:49:33 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:43.365 02:49:33 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:43.365 02:49:33 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:43.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:43.365 02:49:33 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:43.365 02:49:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:43.365 [2024-07-11 02:49:33.631754] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:43:43.365 [2024-07-11 02:49:33.631847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2003159 ] 00:43:43.365 EAL: No free 2048 kB hugepages reported on node 1 00:43:43.365 [2024-07-11 02:49:33.690816] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:43.365 [2024-07-11 02:49:33.778093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:43.622 02:49:33 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:43.622 02:49:33 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:43:43.622 02:49:33 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:43:43.622 02:49:33 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:43.622 02:49:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:43.622 [2024-07-11 02:49:33.990465] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:43.622 null0 00:43:43.622 [2024-07-11 02:49:34.022506] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:43.622 [2024-07-11 02:49:34.022863] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:43.622 02:49:34 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:43.880 02:49:34 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:43:43.880 403865201 00:43:43.880 02:49:34 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:43:43.880 285104435 00:43:43.880 02:49:34 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2003254 00:43:43.880 02:49:34 keyring_linux -- keyring/linux.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:43:43.880 02:49:34 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2003254 /var/tmp/bperf.sock 00:43:43.880 02:49:34 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2003254 ']' 00:43:43.880 02:49:34 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:43.880 02:49:34 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:43.880 02:49:34 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:43.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:43.880 02:49:34 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:43.880 02:49:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:43.880 [2024-07-11 02:49:34.081207] Starting SPDK v24.09-pre git sha1 9937c0160 / DPDK 22.11.4 initialization... 
00:43:43.880 [2024-07-11 02:49:34.081287] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2003254 ] 00:43:43.880 EAL: No free 2048 kB hugepages reported on node 1 00:43:43.880 [2024-07-11 02:49:34.134502] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:43.880 [2024-07-11 02:49:34.221937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:44.137 02:49:34 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:44.137 02:49:34 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:43:44.137 02:49:34 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:43:44.137 02:49:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:43:44.395 02:49:34 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:43:44.395 02:49:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:43:44.653 02:49:34 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:44.653 02:49:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:44.911 [2024-07-11 02:49:35.131961] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:44.911 
nvme0n1 00:43:44.911 02:49:35 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:43:44.911 02:49:35 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:43:44.911 02:49:35 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:44.911 02:49:35 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:44.911 02:49:35 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:44.911 02:49:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:45.168 02:49:35 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:43:45.168 02:49:35 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:45.168 02:49:35 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:43:45.168 02:49:35 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:43:45.168 02:49:35 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:45.168 02:49:35 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:43:45.168 02:49:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:45.425 02:49:35 keyring_linux -- keyring/linux.sh@25 -- # sn=403865201 00:43:45.425 02:49:35 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:43:45.425 02:49:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:45.425 02:49:35 keyring_linux -- keyring/linux.sh@26 -- # [[ 403865201 == \4\0\3\8\6\5\2\0\1 ]] 00:43:45.425 02:49:35 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 403865201 00:43:45.425 02:49:35 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:43:45.425 02:49:35 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:45.683 Running I/O for 1 seconds... 00:43:46.617 00:43:46.617 Latency(us) 00:43:46.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:46.617 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:43:46.617 nvme0n1 : 1.01 8979.66 35.08 0.00 0.00 14143.58 9223.59 23495.87 00:43:46.617 =================================================================================================================== 00:43:46.617 Total : 8979.66 35.08 0.00 0.00 14143.58 9223.59 23495.87 00:43:46.617 0 00:43:46.617 02:49:36 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:46.617 02:49:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:46.876 02:49:37 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:43:46.876 02:49:37 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:43:46.876 02:49:37 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:46.876 02:49:37 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:46.876 02:49:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:46.876 02:49:37 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:47.442 02:49:37 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:43:47.442 02:49:37 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:47.442 02:49:37 keyring_linux -- keyring/linux.sh@23 -- # return 00:43:47.442 02:49:37 keyring_linux -- 
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:47.442 02:49:37 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:43:47.442 02:49:37 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:47.442 02:49:37 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:43:47.442 02:49:37 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:47.442 02:49:37 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:43:47.442 02:49:37 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:47.442 02:49:37 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:47.442 02:49:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:47.442 [2024-07-11 02:49:37.857393] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:47.442 [2024-07-11 02:49:37.857671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf59c0 (107): Transport endpoint is not connected 00:43:47.442 [2024-07-11 02:49:37.858662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0xcf59c0 (9): Bad file descriptor 00:43:47.442 [2024-07-11 02:49:37.859661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:47.442 [2024-07-11 02:49:37.859684] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:47.442 [2024-07-11 02:49:37.859700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:43:47.702 request: 00:43:47.702 { 00:43:47.702 "name": "nvme0", 00:43:47.702 "trtype": "tcp", 00:43:47.702 "traddr": "127.0.0.1", 00:43:47.702 "adrfam": "ipv4", 00:43:47.702 "trsvcid": "4420", 00:43:47.702 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:47.702 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:47.702 "prchk_reftag": false, 00:43:47.702 "prchk_guard": false, 00:43:47.702 "hdgst": false, 00:43:47.702 "ddgst": false, 00:43:47.702 "psk": ":spdk-test:key1", 00:43:47.702 "method": "bdev_nvme_attach_controller", 00:43:47.702 "req_id": 1 00:43:47.702 } 00:43:47.702 Got JSON-RPC error response 00:43:47.702 response: 00:43:47.702 { 00:43:47.702 "code": -5, 00:43:47.702 "message": "Input/output error" 00:43:47.702 } 00:43:47.702 02:49:37 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:43:47.702 02:49:37 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:47.702 02:49:37 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:43:47.702 02:49:37 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:47.702 02:49:37 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:43:47.702 02:49:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:47.702 02:49:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:43:47.702 02:49:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:43:47.702 02:49:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:43:47.702 02:49:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key0 00:43:47.702 02:49:37 keyring_linux -- keyring/linux.sh@33 -- # sn=403865201 00:43:47.702 02:49:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 403865201 00:43:47.702 1 links removed 00:43:47.702 02:49:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:47.702 02:49:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:43:47.702 02:49:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:43:47.702 02:49:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:43:47.702 02:49:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:43:47.702 02:49:37 keyring_linux -- keyring/linux.sh@33 -- # sn=285104435 00:43:47.702 02:49:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 285104435 00:43:47.702 1 links removed 00:43:47.702 02:49:37 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2003254 00:43:47.702 02:49:37 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2003254 ']' 00:43:47.702 02:49:37 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2003254 00:43:47.702 02:49:37 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:43:47.702 02:49:37 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:47.702 02:49:37 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2003254 00:43:47.702 02:49:37 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:43:47.702 02:49:37 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:43:47.702 02:49:37 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2003254' 00:43:47.702 killing process with pid 2003254 00:43:47.702 02:49:37 keyring_linux -- common/autotest_common.sh@967 -- # kill 2003254 00:43:47.702 Received shutdown signal, test time was about 1.000000 seconds 00:43:47.702 00:43:47.702 Latency(us) 00:43:47.702 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:47.702 =================================================================================================================== 00:43:47.702 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:47.702 02:49:37 keyring_linux -- common/autotest_common.sh@972 -- # wait 2003254 00:43:47.702 02:49:38 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2003159 00:43:47.702 02:49:38 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2003159 ']' 00:43:47.702 02:49:38 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2003159 00:43:47.702 02:49:38 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:43:47.702 02:49:38 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:47.702 02:49:38 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2003159 00:43:47.702 02:49:38 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:43:47.702 02:49:38 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:43:47.702 02:49:38 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2003159' 00:43:47.702 killing process with pid 2003159 00:43:47.702 02:49:38 keyring_linux -- common/autotest_common.sh@967 -- # kill 2003159 00:43:47.702 02:49:38 keyring_linux -- common/autotest_common.sh@972 -- # wait 2003159 00:43:47.961 00:43:47.961 real 0m4.946s 00:43:47.961 user 0m10.130s 00:43:47.961 sys 0m1.534s 00:43:47.961 02:49:38 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:47.961 02:49:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:47.961 ************************************ 00:43:47.961 END TEST keyring_linux 00:43:47.961 ************************************ 00:43:47.961 02:49:38 -- common/autotest_common.sh@1142 -- # return 0 00:43:47.961 02:49:38 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:43:47.961 02:49:38 -- spdk/autotest.sh@312 -- # '[' 0 -eq 
1 ']' 00:43:47.961 02:49:38 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:43:47.961 02:49:38 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:43:47.961 02:49:38 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:43:47.961 02:49:38 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:43:47.961 02:49:38 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:43:47.962 02:49:38 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:43:47.962 02:49:38 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:43:47.962 02:49:38 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:43:47.962 02:49:38 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:43:47.962 02:49:38 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:43:47.962 02:49:38 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:43:47.962 02:49:38 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:43:47.962 02:49:38 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:43:47.962 02:49:38 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:43:47.962 02:49:38 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:43:47.962 02:49:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:43:47.962 02:49:38 -- common/autotest_common.sh@10 -- # set +x 00:43:47.962 02:49:38 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:43:47.962 02:49:38 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:43:47.962 02:49:38 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:43:47.962 02:49:38 -- common/autotest_common.sh@10 -- # set +x 00:43:49.866 INFO: APP EXITING 00:43:49.866 INFO: killing all VMs 00:43:49.866 INFO: killing vhost app 00:43:49.866 WARN: no vhost pid file found 00:43:49.866 INFO: EXIT DONE 00:43:50.433 0000:84:00.0 (8086 0a54): Already using the nvme driver 00:43:50.433 0000:00:04.7 (8086 3c27): Already using the ioatdma driver 00:43:50.433 0000:00:04.6 (8086 3c26): Already using the ioatdma driver 00:43:50.433 0000:00:04.5 (8086 3c25): Already using the ioatdma driver 00:43:50.433 0000:00:04.4 (8086 3c24): Already using the ioatdma driver 
00:43:50.433 0000:00:04.3 (8086 3c23): Already using the ioatdma driver 00:43:50.433 0000:00:04.2 (8086 3c22): Already using the ioatdma driver 00:43:50.433 0000:00:04.1 (8086 3c21): Already using the ioatdma driver 00:43:50.433 0000:00:04.0 (8086 3c20): Already using the ioatdma driver 00:43:50.433 0000:80:04.7 (8086 3c27): Already using the ioatdma driver 00:43:50.433 0000:80:04.6 (8086 3c26): Already using the ioatdma driver 00:43:50.692 0000:80:04.5 (8086 3c25): Already using the ioatdma driver 00:43:50.692 0000:80:04.4 (8086 3c24): Already using the ioatdma driver 00:43:50.692 0000:80:04.3 (8086 3c23): Already using the ioatdma driver 00:43:50.692 0000:80:04.2 (8086 3c22): Already using the ioatdma driver 00:43:50.692 0000:80:04.1 (8086 3c21): Already using the ioatdma driver 00:43:50.692 0000:80:04.0 (8086 3c20): Already using the ioatdma driver 00:43:51.629 Cleaning 00:43:51.629 Removing: /var/run/dpdk/spdk0/config 00:43:51.629 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:43:51.629 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:43:51.629 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:43:51.629 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:43:51.629 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:43:51.629 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:43:51.629 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:43:51.629 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:43:51.629 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:43:51.629 Removing: /var/run/dpdk/spdk0/hugepage_info 00:43:51.629 Removing: /var/run/dpdk/spdk1/config 00:43:51.629 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:43:51.629 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:43:51.629 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:43:51.629 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:43:51.629 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:43:51.629 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:43:51.629 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:43:51.629 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:43:51.629 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:43:51.629 Removing: /var/run/dpdk/spdk1/hugepage_info 00:43:51.629 Removing: /var/run/dpdk/spdk1/mp_socket 00:43:51.629 Removing: /var/run/dpdk/spdk2/config 00:43:51.629 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:43:51.629 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:43:51.629 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:43:51.629 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:43:51.629 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:43:51.630 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:43:51.630 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:43:51.630 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:43:51.630 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:43:51.630 Removing: /var/run/dpdk/spdk2/hugepage_info 00:43:51.630 Removing: /var/run/dpdk/spdk3/config 00:43:51.630 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:43:51.630 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:43:51.630 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:43:51.630 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:43:51.630 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:43:51.630 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:43:51.630 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:43:51.630 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:43:51.630 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:43:51.630 Removing: /var/run/dpdk/spdk3/hugepage_info 00:43:51.630 Removing: /var/run/dpdk/spdk4/config 00:43:51.630 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:43:51.630 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:43:51.888 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:43:51.888 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:43:51.888 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:43:51.888 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:43:51.888 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:43:51.888 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:43:51.888 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:43:51.888 Removing: /var/run/dpdk/spdk4/hugepage_info 00:43:51.888 Removing: /dev/shm/bdev_svc_trace.1 00:43:51.888 Removing: /dev/shm/nvmf_trace.0 00:43:51.888 Removing: /dev/shm/spdk_tgt_trace.pid1743397 00:43:51.888 Removing: /var/run/dpdk/spdk0 00:43:51.888 Removing: /var/run/dpdk/spdk1 00:43:51.888 Removing: /var/run/dpdk/spdk2 00:43:51.888 Removing: /var/run/dpdk/spdk3 00:43:51.888 Removing: /var/run/dpdk/spdk4 00:43:51.888 Removing: /var/run/dpdk/spdk_pid1742175 00:43:51.888 Removing: /var/run/dpdk/spdk_pid1742738 00:43:51.888 Removing: /var/run/dpdk/spdk_pid1743397 00:43:51.888 Removing: /var/run/dpdk/spdk_pid1743770 00:43:51.888 Removing: /var/run/dpdk/spdk_pid1744292 00:43:51.888 Removing: /var/run/dpdk/spdk_pid1744379 00:43:51.888 Removing: /var/run/dpdk/spdk_pid1745025 00:43:51.888 Removing: /var/run/dpdk/spdk_pid1745078 00:43:51.888 Removing: /var/run/dpdk/spdk_pid1745291 00:43:51.888 Removing: /var/run/dpdk/spdk_pid1746835 00:43:51.888 Removing: /var/run/dpdk/spdk_pid1747550 00:43:51.888 Removing: /var/run/dpdk/spdk_pid1747712 00:43:51.888 Removing: /var/run/dpdk/spdk_pid1747866 00:43:51.888 Removing: /var/run/dpdk/spdk_pid1748034 00:43:51.888 Removing: /var/run/dpdk/spdk_pid1748193 00:43:51.888 Removing: /var/run/dpdk/spdk_pid1748316 00:43:51.888 Removing: /var/run/dpdk/spdk_pid1748527 00:43:51.888 Removing: /var/run/dpdk/spdk_pid1748673 00:43:51.888 Removing: /var/run/dpdk/spdk_pid1748927 00:43:51.888 Removing: /var/run/dpdk/spdk_pid1750958 00:43:51.888 Removing: /var/run/dpdk/spdk_pid1751090 00:43:51.888 
Removing: /var/run/dpdk/spdk_pid1751220 00:43:51.888 Removing: /var/run/dpdk/spdk_pid1751227 00:43:51.888 Removing: /var/run/dpdk/spdk_pid1751474 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1751561 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1751808 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1751896 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1752037 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1752047 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1752177 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1752273 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1752575 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1752703 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1752868 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1753004 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1753110 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1753175 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1753306 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1753476 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1753635 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1753756 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1753880 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1754061 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1754214 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1754339 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1754465 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1754652 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1754797 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1754919 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1755045 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1755249 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1755374 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1755499 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1755635 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1755840 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1755967 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1756087 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1756248 00:43:51.889 Removing: 
/var/run/dpdk/spdk_pid1756420 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1757942 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1801409 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1803341 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1808796 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1811277 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1813048 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1813358 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1816438 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1819799 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1819817 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1820568 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1821060 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1821557 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1821856 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1821864 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1821976 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1822073 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1822084 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1822578 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1823075 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1823569 00:43:51.889 Removing: /var/run/dpdk/spdk_pid1823882 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1823885 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1824080 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1824766 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1825329 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1829676 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1829942 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1833012 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1836383 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1838134 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1842972 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1847595 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1848556 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1849087 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1856986 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1858598 
00:43:52.148 Removing: /var/run/dpdk/spdk_pid1883040 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1885281 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1886179 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1887167 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1887275 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1887294 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1887400 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1887739 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1888729 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1889280 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1889530 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1891139 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1891704 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1892053 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1893900 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1896401 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1899174 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1917467 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1919573 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1922477 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1923224 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1924074 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1926051 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1927786 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1930958 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1930966 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1933103 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1933205 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1933395 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1933600 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1933605 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1934509 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1935407 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1936385 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1937272 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1938180 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1939268 00:43:52.148 Removing: 
/var/run/dpdk/spdk_pid1942709 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1943021 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1944034 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1944682 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1947521 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1949014 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1951642 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1954341 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1959407 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1962782 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1962784 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1973447 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1973782 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1974099 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1974496 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1974945 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1975263 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1975568 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1975966 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1977809 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1977944 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1980822 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1980965 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1982258 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1986086 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1986138 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1988343 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1989405 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1990577 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1991695 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1992808 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1993479 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1997477 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1997754 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1998050 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1999268 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1999573 00:43:52.148 Removing: /var/run/dpdk/spdk_pid1999878 
00:43:52.148 Removing: /var/run/dpdk/spdk_pid2001716 00:43:52.148 Removing: /var/run/dpdk/spdk_pid2001732 00:43:52.148 Removing: /var/run/dpdk/spdk_pid2002873 00:43:52.148 Removing: /var/run/dpdk/spdk_pid2003159 00:43:52.148 Removing: /var/run/dpdk/spdk_pid2003254 00:43:52.148 Clean 00:43:52.407 02:49:42 -- common/autotest_common.sh@1451 -- # return 0 00:43:52.407 02:49:42 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:43:52.407 02:49:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:43:52.407 02:49:42 -- common/autotest_common.sh@10 -- # set +x 00:43:52.407 02:49:42 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:43:52.407 02:49:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:43:52.407 02:49:42 -- common/autotest_common.sh@10 -- # set +x 00:43:52.407 02:49:42 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:52.407 02:49:42 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:43:52.407 02:49:42 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:43:52.407 02:49:42 -- spdk/autotest.sh@391 -- # hash lcov 00:43:52.407 02:49:42 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:43:52.407 02:49:42 -- spdk/autotest.sh@393 -- # hostname 00:43:52.407 02:49:42 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-02 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:43:52.665 geninfo: WARNING: invalid characters removed from testname! 
00:44:24.805 02:50:12 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:26.178 02:50:16 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:29.457 02:50:19 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:31.996 02:50:22 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:35.284 02:50:25 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:38.566 02:50:28 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:41.102 02:50:31 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:44:41.102 02:50:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:41.102 02:50:31 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:44:41.102 02:50:31 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:41.102 02:50:31 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:41.102 02:50:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.102 02:50:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.102 02:50:31 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.102 02:50:31 -- paths/export.sh@5 -- $ export PATH 00:44:41.102 02:50:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.102 02:50:31 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:44:41.103 02:50:31 -- common/autobuild_common.sh@444 -- $ date +%s 00:44:41.103 02:50:31 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720659031.XXXXXX 00:44:41.103 02:50:31 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720659031.vJskWZ 00:44:41.103 02:50:31 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:44:41.103 02:50:31 -- common/autobuild_common.sh@450 -- $ '[' -n v22.11.4 ']' 00:44:41.103 02:50:31 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:44:41.103 02:50:31 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:44:41.103 02:50:31 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:44:41.103 02:50:31 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:44:41.103 02:50:31 -- common/autobuild_common.sh@460 -- $ get_config_params 00:44:41.103 02:50:31 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:44:41.103 02:50:31 -- common/autotest_common.sh@10 -- $ set +x 00:44:41.103 02:50:31 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:44:41.103 02:50:31 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:44:41.103 02:50:31 -- pm/common@17 -- $ local monitor 00:44:41.103 02:50:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:41.103 02:50:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:41.103 02:50:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:41.103 02:50:31 -- pm/common@21 -- $ date +%s 00:44:41.103 02:50:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:41.103 02:50:31 -- pm/common@21 -- $ date +%s 00:44:41.103 02:50:31 -- pm/common@25 -- $ sleep 1 00:44:41.103 02:50:31 -- pm/common@21 -- $ date +%s 00:44:41.103 02:50:31 -- pm/common@21 -- $ date +%s 00:44:41.103 02:50:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720659031 00:44:41.103 02:50:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720659031 00:44:41.103 
02:50:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720659031 00:44:41.103 02:50:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720659031 00:44:41.103 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720659031_collect-vmstat.pm.log 00:44:41.103 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720659031_collect-cpu-load.pm.log 00:44:41.103 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720659031_collect-cpu-temp.pm.log 00:44:41.103 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720659031_collect-bmc-pm.bmc.pm.log 00:44:42.039 02:50:32 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:44:42.039 02:50:32 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j32 00:44:42.039 02:50:32 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:42.039 02:50:32 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:44:42.039 02:50:32 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:44:42.039 02:50:32 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:44:42.039 02:50:32 -- spdk/autopackage.sh@19 -- $ timing_finish 00:44:42.039 02:50:32 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:44:42.039 02:50:32 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:44:42.040 02:50:32 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:44:42.298 02:50:32 -- spdk/autopackage.sh@20 -- $ exit 0 00:44:42.298 02:50:32 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:44:42.298 02:50:32 -- pm/common@29 -- $ signal_monitor_resources TERM 00:44:42.298 02:50:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:44:42.298 02:50:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:42.298 02:50:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:44:42.298 02:50:32 -- pm/common@44 -- $ pid=2013496 00:44:42.298 02:50:32 -- pm/common@50 -- $ kill -TERM 2013496 00:44:42.298 02:50:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:42.298 02:50:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:44:42.298 02:50:32 -- pm/common@44 -- $ pid=2013498 00:44:42.298 02:50:32 -- pm/common@50 -- $ kill -TERM 2013498 00:44:42.298 02:50:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:42.298 02:50:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:44:42.298 02:50:32 -- pm/common@44 -- $ pid=2013500 00:44:42.298 02:50:32 -- pm/common@50 -- $ kill -TERM 2013500 00:44:42.298 02:50:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:42.298 02:50:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:44:42.298 02:50:32 -- pm/common@44 -- $ pid=2013532 00:44:42.298 02:50:32 -- pm/common@50 -- $ sudo -E kill -TERM 2013532 00:44:42.298 + [[ -n 1646405 ]] 00:44:42.298 + sudo kill 1646405 00:44:42.309 [Pipeline] } 00:44:42.327 [Pipeline] // stage 00:44:42.333 [Pipeline] } 00:44:42.351 [Pipeline] // timeout 00:44:42.357 [Pipeline] } 00:44:42.374 [Pipeline] // catchError 00:44:42.379 [Pipeline] } 
00:44:42.398 [Pipeline] // wrap 00:44:42.406 [Pipeline] } 00:44:42.422 [Pipeline] // catchError 00:44:42.431 [Pipeline] stage 00:44:42.432 [Pipeline] { (Epilogue) 00:44:42.445 [Pipeline] catchError 00:44:42.447 [Pipeline] { 00:44:42.458 [Pipeline] echo 00:44:42.460 Cleanup processes 00:44:42.466 [Pipeline] sh 00:44:42.750 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:42.750 2013683 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:44:42.750 2013713 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:42.763 [Pipeline] sh 00:44:43.045 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:43.045 ++ grep -v 'sudo pgrep' 00:44:43.045 ++ awk '{print $1}' 00:44:43.045 + sudo kill -9 2013683 00:44:43.057 [Pipeline] sh 00:44:43.336 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:44:53.355 [Pipeline] sh 00:44:53.637 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:44:53.637 Artifacts sizes are good 00:44:53.650 [Pipeline] archiveArtifacts 00:44:53.657 Archiving artifacts 00:44:53.873 [Pipeline] sh 00:44:54.153 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:44:54.166 [Pipeline] cleanWs 00:44:54.175 [WS-CLEANUP] Deleting project workspace... 00:44:54.175 [WS-CLEANUP] Deferred wipeout is used... 00:44:54.181 [WS-CLEANUP] done 00:44:54.183 [Pipeline] } 00:44:54.201 [Pipeline] // catchError 00:44:54.212 [Pipeline] sh 00:44:54.501 + logger -p user.info -t JENKINS-CI 00:44:54.510 [Pipeline] } 00:44:54.525 [Pipeline] // stage 00:44:54.530 [Pipeline] } 00:44:54.547 [Pipeline] // node 00:44:54.552 [Pipeline] End of Pipeline 00:44:54.586 Finished: SUCCESS